

The Mission to Decentralize the Internet - ghosh
http://www.newyorker.com/online/blogs/elements/2013/12/the-mission-to-decentralize-the-internet.html

======
znowi
> On Monday, eight major tech firms, many of them competitors, united to
> demand an overhaul of government transparency and surveillance laws.

Not long ago they _united_ under the PRISM project to spy on their users. What
a quick change of heart.

And again, there's already a similar call for reform started by Mozilla months
ago (in June), which none of the PRISM companies supported. They were busy
releasing copycat reports denying everything back then.

[https://stopwatching.us/](https://stopwatching.us/)

~~~
salient
> Not long ago they united under the PRISM project to spy on their users. What
> a quick change of heart.

Yes, but then they did it for your safety, and because they care about you.
Now they're doing it because...well, they're starting to lose money.

~~~
amirmc
A slightly less cynical view: they were in the dark about who else was
involved and didn't fancy picking a fight with the USG alone (and with an
unknown amount of public support). Now that things have become public and they
know who else is involved, they can at least co-ordinate, knowing that at
least one segment of the public cares.

~~~
charonn0
It's always easier to oppress the man who believes he stands alone.

------
psc
While Google, Amazon, etc. being centralized might be bad in some ways (and
good in others), I'm less concerned with that kind of centralization (because
there are alternatives). The most dangerous centralization is at the ISP
level, because it's the first gateway to the internet. When I saw the article
title I was hoping it would be about that. Unfortunately, wired internet
connections make for a really good natural monopoly, so until we have super
fast wireless internet everywhere, I don't expect a solution. If any wifi mesh
net appears, it would be a good start, but I'm skeptical it would become
widespread.

The other dangerous centralization of the internet is the DNS system. I know,
technically it's decentralized because there are thousands of root name
servers. But politically it's centralized under ICANN. Domains are controlled
by registrars and governments. That's not necessarily a bad thing, but a truly
distributed system would help guarantee a neutral internet. There are some
good solutions for this already, namely Namecoin ([http://dot-
bit.org](http://dot-bit.org)), which uses proof of work and a bitcoin-like
blockchain as a P2P DNS. Namecoin uses the .bit TLD, but probably the largest
barrier to adoption is that it's not accessible by default on devices because
it's not ICANN sanctioned and therefore not in the root name servers. There
are workarounds for this. ICANN could add Namecoin to the root servers but
then it would still be under the control of ICANN. The true solution would be
for ISPs to add it to their DNS. If the ISP decentralization is solved, this
becomes trivial.
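The blockchain-as-DNS idea can be sketched in a few lines: every node syncs the chain, so the chain state behaves like a replicated key-value store and resolving a .bit name is just a local lookup of its "d/<name>" key. This is a toy model under that assumption — the record fields and IP address below are invented, and real Namecoin records carry much richer data:

```python
# Toy model of Namecoin-style resolution: the synced blockchain acts as a
# replicated key-value store, so resolving a .bit name is a local lookup of
# its "d/<name>" key. The record and IP below are invented for illustration.

chain_state = {
    "d/example": {"ip": "10.1.2.3", "ttl": 3600},   # hypothetical name record
}

def resolve_bit(domain):
    """Resolve 'example.bit' by looking up 'd/example' in local chain state."""
    if not domain.endswith(".bit"):
        raise ValueError("not a .bit domain")
    key = "d/" + domain[: -len(".bit")]
    record = chain_state.get(key)
    if record is None:
        raise LookupError("no record for " + domain)
    return record["ip"]

print(resolve_bit("example.bit"))   # 10.1.2.3
```

Because every node holds the full chain, there is no root server to ask — which is exactly why .bit can work without being in the ICANN root, and also why devices don't resolve it out of the box.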

In my opinion, this is the only real way to guarantee net neutrality and
prevent censorship.

~~~
lukifer
I absolutely agree that ISPs are the nexus point for a truly decentralized
internet. I've honestly been surprised that ISPs haven't leveraged/abused
their oligopoly more than they have.

> If any wifi mesh net appears, it would be a good start, but I'm skeptical it
> would become widespread.

We do have cjdns, Project Meshnet and the Hyperboria network in Seattle. Still
pretty fledgling, though.

[http://en.wikipedia.org/wiki/Cjdns](http://en.wikipedia.org/wiki/Cjdns)

[https://projectmeshnet.org/](https://projectmeshnet.org/)

[http://hyperboria.net/](http://hyperboria.net/)

------
jokoon
Centralized systems are almost asking to be spied on, laws or not. Even if
the NSA does not have the right to do what it did, I'm sure they'd end up
doing it anyway through corruption, and there are many ways to do it. Worse,
it would open the door to a lot of other shady practices, and I'm sure
criminals would make a buck out of it.

The fact that a huge quantity of information is put in one place increases the
risk it will be peeked into by a lot of parties.

Decentralized systems are a little more difficult to maintain and develop, but
are so much cheaper and eliminate the risk of mass surveillance. There are
still security risks depending on the design, but honestly I think
decentralized systems have a lot of future; the advantages and features they
bring counterbalance the disadvantages by a lot.

I wish BitTorrent Inc would produce more technologies, and some competition
would really be cool.

~~~
salient
I agree. As long as big companies are so good at tracking and making money
through surveillance themselves, it's just too irresistible for spy agencies
not to try and get that data one way or another.

That's why companies need to at least use forward secrecy, keep data at rest
encrypted, and also delete more ephemeral data a lot more quickly.

Google used to delete the data they held on their users every 6 months. I
don't think they plan on ever deleting anything anymore. That's
the wrong way to go about it. Tracking data should not exist for more than a
few months at most, and private communications should not be held for more
than a year. Of course, it would be so much easier for the user, if everything
was end-to-end encrypted, and then it wouldn't need to be deleted.
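The retention policy suggested here is simple to sketch. The categories and windows below are illustrative only (tracking data kept a few months at most, private communications under a year), not anything any company actually does:

```python
from datetime import datetime, timedelta

# Illustrative retention windows, per the suggestion above: tracking data
# for a few months at most, private communications for under a year.
RETENTION = {
    "tracking": timedelta(days=90),
    "messages": timedelta(days=365),
}

def purge(records, now):
    """Keep only records still inside their category's retention window."""
    return [r for r in records if now - r["created"] <= RETENTION[r["kind"]]]

records = [
    {"kind": "tracking", "created": datetime(2013, 1, 1)},   # long expired
    {"kind": "messages", "created": datetime(2013, 11, 1)},  # still retained
]
kept = purge(records, now=datetime(2013, 12, 10))
print([r["kind"] for r in kept])   # ['messages']
```

The hard part isn't the purge loop, of course — it's making sure backups, logs and analytics pipelines honour the same clock.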

------
gwu78
The easiest path to decentralization is to either:

1. Give every user a publicly reachable IP address at their home or business,
or

2. Give every user an account with some remote organization that has a
publicly reachable IP address, where the user can be listed in some sort of IP
address directory. (Some users now use "dynamic DNS" for this purpose.)

I say 1 is better than 2.

The first big hurdle in the problem of decentralization is reachability; 30
years on, users of today's internet are not "directly" connected for two-way
communication. Getting users past this initial hurdle of end-to-end
connectivity opens up the problem of centralization for all developers to
solve. The user can pick and choose her software to make direct connections to
other users, or she can choose none of it.

The www is a given. Every web developer assumes users can access the www.
Direct connections to other users should also be a given. And every developer
should be able to assume that users can connect to each other.

It's clear that users want direct connections to each other. They want to
communicate. The success of store and forward solutions like email over
several decades is proof. What's not clear is that they want (or need) a
middleman to broker their connections.
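What option 1 buys you can be shown concretely: with a publicly reachable address and ingress allowed, "accepting unsolicited connections" is nothing more than bind/listen/accept. A minimal sketch — loopback stands in for the public internet here, and the greeting is arbitrary:

```python
import socket
import threading

# Server side: bind + listen is all "allow unsolicited ingress" means.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # in the real scenario: a public IP, a fixed port
srv.listen(1)
port = srv.getsockname()[1]

received = []

def accept_peer():
    conn, _ = srv.accept()   # blocks until some peer connects directly
    received.append(conn.recv(1024).decode())
    conn.close()

t = threading.Thread(target=accept_peer)
t.start()

# Peer side: a direct connection, no broker in the middle.
peer = socket.create_connection(("127.0.0.1", port))
peer.sendall(b"hello, neighbour")
peer.close()
t.join()
srv.close()
print(received[0])   # hello, neighbour
```

Behind a consumer NAT or a default-deny firewall, the `accept()` never fires for an outside peer — which is the whole point of the reachability argument.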

~~~
amirmc
I agree, and I'm working on a tool that helps with this [1], but based on
domain names. Of course that still depends on the registrars (e.g. Verisign),
but we believe it's easier for folks to remember names rather than numbers and
then _use_ the infrastructure. There's a paper we wrote that goes over the
architecture and examples, in case you're interested [2].

[1]
[http://nymote.org/software/signpost/](http://nymote.org/software/signpost/)

[2] [http://nymote.org/docs/2013-foci-
signposts.pdf](http://nymote.org/docs/2013-foci-signposts.pdf)

~~~
gwu78
".. we believe it's easier for folks to remember names rather than numbers..."

Yes, but regardless of whether they use names or numbers, they still cannot
get "public reachability" (a public IP address and allowance of unsolicited
ingress traffic) without either getting a publicly reachable IP address (ISPs
in the US call this a "business class" account), i.e. option 1, or somehow
involving at least one computer that already has public reachability, i.e.
option 2.

At least, _I_ know of no other way.

If you know how to create public reachability without giving the user a
publicly reachable IP address or involving a computer that already has public
reachability, please develop and release the code; I think you are a
networking genius and have done the impossible!

In any event, my point is that it's probably better that the computer that
already has public reachability be owned and controlled _by the user_, not
some third party "provider". The user should only need one provider to
communicate over the internet: their ISP. I say just give users a publicly
reachable IP address: "upgrade" their account with the ISP to allow
unsolicited ingress connections.

A myriad of communications and data synchronization solutions will follow. I
reckon developers would love to write code for an end-to-end network. It would
certainly be more functional than one where everything must be done via HTTP
through a "web browser".

OCaml, NaCl and DNSCurve. Nice work! I hope your project succeeds!

~~~
PeterisP
IPv6 should bring everyone to public reachability, eliminating the need for
NAT due to IP address shortage.

~~~
gwu78
Technically, it is not "NAT" that prevents what I am calling "reachability".
It is "firewalls".

------
pauljohncleary
I'm really surprised [http://tent.io](http://tent.io) has not been mentioned
here.

Tent is a protocol that allows users to pick and choose a datastore provider
for use with any Tent-compatible application. The provider can be centralised,
on the user's own infrastructure, or anywhere in between.

Applications can be installed locally (without a dependency on communicating
with another server, other than the user's chosen datastore) or hosted on a
web server.

Users get to choose where (and control how) an application stores their data,
and developers don't need to think about infrastructure, user management or
authorisation when designing and building their app.

The protocol is at v0.3 and is not ready for the mainstream yet, but it's
coming, and (for me) has the potential to replace a bunch of web apps I
regularly use (think dropbox, gmail, github etc.).

------
the_watcher
Unfortunately, I've become sadly apathetic to online privacy concerns. There
are always going to be people willing to pay those with the skills to spy. In
the past, PIs and HUMINT were really the only ways to get "private"
information. The internet makes it easier, and allows it to scale massively
(the same characteristics that caused the internet to generate the massive
economic gains of the last 20 years). I've just shifted to assuming that
anything that can be spied on is (semi) public. It's suboptimal, and sad, but
I've made the choice of trading privacy for convenience, since I'll never stop
using Google or Facebook or Twitter or Amazon, or storing my credit cards and
using online banking, at least until someone designs a better option (which I
expect will fight a constant battle against surveillance).

~~~
amirmc
I seem to come across this view more often and it saddens me. It doesn't have
to be all or nothing. For example, if someone really wants to break into my
house they probably will but that doesn't mean I'm not going to bother locking
my doors and windows in the future.

I'm in favour of making it harder for the seemingly casual surveillance to
take place using means that _already_ exist, like encryption and decentralised
systems. If more devs etc. took that view and built things that were properly
secure by default then maybe things would be different. However, such systems
aren't really compatible with advertising-based business models, and there's a
real trade-off in terms of speed to build (that's why, sometimes, 'security'
feels like it's slapped on at the end).

~~~
the_watcher
I think I agree with you. Locking the house doors and windows is a good
analogy. I'd love for it to be more difficult for the surveillance state. It
just seems like the safer way to proceed is to assume my digital life is
public (especially since I don't have anything in particular to hide). Just as
I lock my doors and windows even though the utopian society would have
unlocked doors all the time.

~~~
amirmc
If I were getting started on FB and social media now, I would also default to
'public' and treat posts accordingly. However, I first got involved when it
was private and I treated it very differently (this was before I became aware
of what I was really giving access to).

I'd also question your comment about having nothing in particular to hide.
Everyone has things they don't want to make public but perhaps you won't
appreciate what they are until someone exposes them for you? You're using a
pseudonym here but would you post the following information:

Who you are, where you live, how much you earn, whether you ever had any STIs,
the last time you told a little white lie (to whom and about what),
whether you have kids (what school?).

If your reaction to the above is that it's none of my business, you'd be
absolutely right but if you really have "nothing to hide" then why not post
answers here for the world? Perhaps some of it is 'public' anyway and I could
dig it up and post it for you, but if I did, how would you feel? My point here is
that you _do_ have many things you don't want broadcast to the world and I'm
trying to provoke you into thinking about them. Privacy is a complex thing and
you might find the following article interesting.

[https://chronicle.com/article/Why-Privacy-Matters-Even-
if/12...](https://chronicle.com/article/Why-Privacy-Matters-Even-if/127461/)

------
mark_l_watson
Good article that hits on an important subject: decentralized privacy-
enhancing systems have a tough battle against easy to use consumer friendly
services from Google, Facebook, etc.

I have some hope that local grid networks will catch on. Also cheap appliances
that are easy to use and offer local cloud services may eventually catch on.

Not to sound too political, but this is a fight against large corporations
and/or financial elites firming up their control of _everything_. I am not
even sure how much personal effort I will put into these people's causes, and
I am a long-time supporter of the FSF, EFF, and ACLU.

------
peterwwillis
Decentralized systems suck.

Look at any protocol or network service designed to be fully decentralized.
Inefficient routing, unstable communication, high latency and low bandwidth.
You have to do a lot more work to achieve things that would otherwise be
simple and straightforward.

Take human communications, for example. They're largely decentralized by
default, but we use centralized services to improve efficiency. If some piece
of information needs to be disseminated to a large population over a period of
time, we first record the message, then we post it in a centralized location
viewable by lots of clients[1]. Those clients can then go back and distribute
the information to their connected peers[2], but technology also allows us to
distribute subscriptions of this information to a wider population in a
variety of media[3]. ([1]stone tablets, [2]town crier,
[3]newspapers/television)

Modern people are afraid of centralization in the Internet because they are
suddenly realizing they have no real control over the content or medium. But
this is the case with all other forms of centralized mass communication, too.
People are also afraid of their lack of control over their data, which these
days is tantamount to a form of property. Anyone would freak out if the
contents of their house were suddenly only available at the discretion of some
conglomerate!

But you don't need "decentralization" to keep a backup of your data. Most of
it originates from you anyway, and you can store it before you publish it.
Some service providers even provide methods to download all your data, though
obviously you can't rely on everyone providing you that option (unless someone
passes a law...)

On top of all this, the internet is already a system of decentralized networks
and services. You can just hook in your own services into the network at any
point therein and maintain them yourself. The only restriction is really by
the ISP, and there's lots of those to choose from in most developed countries.
Personally I think we should consider the internet a combination of utility
and public highway, since at this point we all need access, but we also need
competition due to the sensitivity of limits on network access.

~~~
amirmc
> _"... the internet is already a system of decentralized networks and
> services. You can just hook in your own services into the network at any
> point therein and maintain them yourself."_

Sure, but I'd argue we should be making it _easier_ for developers to build
more decentralised systems. Very few people seem to be working on tools and
OSS products at that level and that's because the problems are challenging (of
the kind that a startup wouldn't/shouldn't attempt - hence an active area of
work in university [1]). There are plenty of things that _require_
centralisation but I'd argue that just as many _don't_ need to be centralised
but just end up that way by default.

[1] [http://nymote.org](http://nymote.org) (I'm working on these tools with
others)

~~~
peterwwillis
> _I'd argue we should be making it easier for developers to build more
> decentralised systems_

Why?

~~~
amirmc
Ok, let's start with your points.

_"Inefficient routing, unstable communication, high latency and low
bandwidth"_: If I have two devices (phone and laptop) on the same wifi network
(e.g. home or work) and I want to send something between them, why should it
first go via a cloud service? In my case, photo backups from my phone to my
laptop, which happens via PhotoStream at the moment, or files from my laptop
to my phone. If a route can easily be formed across the nearest AP then it's
shorter, more stable, probably higher bandwidth and certainly lower latency
than via the cloud. As the number of devices I own grows, this use case will
become more prevalent (think "Internet of Things").
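The phone-finds-laptop step could work without any cloud hop at all. A sketch using UDP broadcast on the local segment — the port and "HELLO" message format are invented for illustration (real systems tend to use mDNS/Bonjour or similar), and loopback stands in for the wifi network:

```python
import socket
import threading
import time

DISCOVERY_PORT = 50321   # arbitrary port for this sketch

def announce(name, target="255.255.255.255"):
    """Shout our presence onto the local segment (broadcast by default)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(("HELLO " + name).encode(), (target, DISCOVERY_PORT))
    s.close()

def wait_for_peer(timeout=5.0):
    """Block until some device on the network announces itself."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.bind(("", DISCOVERY_PORT))
    data, (ip, _port) = s.recvfrom(1024)
    s.close()
    return data.decode().split(" ", 1)[1], ip

# Laptop listens in the background; phone announces (loopback here).
found = []
listener = threading.Thread(target=lambda: found.append(wait_for_peer()))
listener.start()
time.sleep(0.2)                          # give the listener time to bind
announce("my-phone", target="127.0.0.1")
listener.join()
print(found[0])
```

Once the peer's address is known, the transfer itself can be a direct socket across the AP — shorter, lower latency, and no third party in the path.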

_"You have to do a lot more work to achieve things that would otherwise be
simple and straightforward"_: Sure, but you haven't asked _why_ this is the
case (and who 'you' is), which is just as important to examine. At the moment,
you _as a developer_ have to do more work in order to build decentralised
systems because not enough _free OSS tools exist yet_. The main reason that
using a central server is easier is because the tools, techniques and
knowledgebase have become 'standardised' but that doesn't preclude there being
_better_ ways of doing things. I'm advocating tools that make it easier to
build decentralised services and products.

Decentralised systems are more resilient than their centralised counterparts.
Any large-scale service/product ends up being decentralised behind the scenes
and if you don't believe me, ponder for a moment how the likes of Google,
Facebook et al. manage to deal with so many users across the globe. They
_have_ to make their own tools to solve the same kinds of problems I'm
alluding to. I'm merely advocating that we try to 'productize' such tools so
that any developer can pick them up and incorporate them from the very
beginning. Real world examples of such things are how Skype worked in the
early days, Dropbox's LAN sync [1] and even Spotify [2], where P2P can improve
user experience and _reduce_ their bandwidth costs. I don't know how their
systems work but this clearly demonstrates the business value in P2P.

As you say yourself, _"the internet is already a system of decentralized
networks and services"_, but don't dismiss the work taken to get there and the
value that the lack of centralisation brings. Those same benefits can be
gained elsewhere provided the appropriate tools/infrastructure are made
available and useable.

Hypothesis: If there were FOSS tools that make it easier to create secure,
decentralised systems, more developers would create secure, decentralised
products.

[1] [https://www.dropbox.com/help/137/en](https://www.dropbox.com/help/137/en)

[2] [http://community.spotify.com/t5/Help-Desktop-Linux-Mac-
and/U...](http://community.spotify.com/t5/Help-Desktop-Linux-Mac-
and/Unadvertised-P2P-feature/m-p/400160#M46512)

~~~
peterwwillis
_> If I have two devices (phone and laptop) on the same wifi network (e.g.
home or work) and I want to send something between them_

Then you do a Bluetooth transfer, or CIFS/SMB file share, or FTP, or HTTP, or
IrDA, or USB, etc. Why would you use a cloud service?

Everyone involved has to do more work to support decentralized services.
Users, developers, admins, etc. The only thing you may not need more of is
infrastructure. Just because you've made it easier to build a service does not
mean the service is easier to use.

No, these products are still centralized. They are merely made more redundant,
fault-tolerant and highly-available. They may also be distributed, which is
different than decentralized.

I never said there aren't benefits to decentralized systems. As I said, they
can be useful. Centralized systems are just easier to create/use/maintain, are
more reliable, and are faster. Depending on the application.

~~~
amirmc
> _"Then you do a Bluetooth transfer, or CIFS/SMB file share, or FTP, or
> HTTP, or IrDA, or USB, etc. Why would you use a cloud service?"_

I suppose this is what you say to people who extol the virtues of Dropbox or
BitTorrent Sync? It almost sounds like you've completely missed the wave of
stuff that's happening, which I find hard to believe (or you're being
deliberately obtuse).

> _"No, these products are still centralized. They are merely made more
> redundant, fault-tolerant and highly-available."_

How do you suppose they are made redundant, fault-tolerant and highly-
available? I've tried to describe how in my previous comment and you haven't
given any counterpoints.

> _"They may also be distributed, which is different than decentralized."_

Ok, this made me think, as I tend to use the words interchangeably. Some quick
Google searching doesn't clarify anything either (I find conflicting
versions). Overall, I get the feeling that you have a different view of what
'centralised' means, but even then, you haven't given any examples of why such
things are more reliable or faster, whereas I've tried to do so.

~~~
peterwwillis
'The wave of stuff that's happening' is not some crazy new paradigm shift.
It's remote network services. Why you would use a remote network service when
all your files are available locally, I have no idea. People are weird.

The easiest way to think of this is in terms of network architecture. A
Client-Server model of network architecture is by definition centralized; you
have one or more clients, and only one server. The client connects to the
server to do whatever it wants. In a decentralized model, the server is any
node with the ability to serve the requests of the clients - which often
includes other clients.

It would take too much time to explain how all of that stuff works, but
suffice to say that just because there's more than one computer involved or
more than one process involved does not make it a decentralized service. If
the client is still just talking to one thing (on the frontend), it's
basically centralized.

To explain how centralized architecture is faster, consider a simple example
where there is one client, and three nodes distributing a file. The client has
to first find and identify the nodes, which usually takes more time than just
resolving a single server, because it has to "search" for its peer nodes. Then
it has to request a file, which is probably split up into pieces and little
requests are sent to each node. Each node then sends the client the pieces.
The client then reassembles them on its end to create the finished file.
Compare this to a server, where from lookup to file transfer, there is one
continuous operation; there is no need to communicate with multiple nodes or
track multiple connections or piece together files. Also with the distributed
model, you are at the mercy of all the nodes in the network, not just one, and
all of the different networks those nodes might be on. You also may end up
doing 3x the network operations, depending on protocols, chunk sizes, etc.
Because you can't bet on the source of the file coming from one reliable
place, network latency may go up, and bandwidth may also go down - in practice
over the internet this is virtually guaranteed vs a central server's large
pipe.
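The trade-off described here can be made concrete with a toy in-memory model: one function fetches the whole file from a single server in one continuous transfer, the other round-robins byte-range requests over several simulated peers and reassembles the pieces. Peers, chunk size and the "file" are invented, and discovery, hashing, retries and congestion handling are all omitted:

```python
# Both fetch patterns from the comment, simulated in memory.

FILE = b"the quick brown fox jumps over the lazy dog"

def fetch_from_server():
    """Centralized: one lookup, one continuous transfer."""
    return FILE

PEERS = [
    lambda lo, hi: FILE[lo:hi],   # each peer can serve arbitrary byte ranges
    lambda lo, hi: FILE[lo:hi],
    lambda lo, hi: FILE[lo:hi],
]

def fetch_from_peers(chunk=15):
    """Decentralized: many small range requests, reassembled client-side."""
    pieces = []
    for i, lo in enumerate(range(0, len(FILE), chunk)):
        peer = PEERS[i % len(PEERS)]          # round-robin over known peers
        pieces.append(peer(lo, lo + chunk))   # one request per chunk
    return b"".join(pieces)

assert fetch_from_peers() == fetch_from_server()   # same bytes, more requests
```

Even in this best case the peer version does one discovery step plus one request per chunk where the server version does one of each; over a real network, each extra round trip adds latency and another node that can be slow or gone.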

