
Locking the Web Open: A Call for a Distributed Web - hachiya
http://brewster.kahle.org/2015/08/11/locking-the-web-open-a-call-for-a-distributed-web-2/
======
schoen
I'm happy to see this article, and it reminds me of things that others have
been talking about for some time (for example, the "Redecentralize"
community).

I've participated in some file-sharing litigation which has made it very clear
to me that decentralized P2P systems are not inherently more anonymous than
other technologies. In fact, there's a cottage industry of P2P monitoring
companies that participate as peers in the P2P networks and record detailed
information about the IP addresses of peers that uploaded and downloaded
particular files. There are often paradoxes where decentralization helps
privacy and anonymity in some ways but harms it in others -- for example, if
you run your own mail server instead of using Gmail, then you've prevented
Google from knowing who communicates with whom, but allowed a network
adversary to learn that information directly, where the network adversary
might not know the messaging relationships if everyone on the network used
Gmail.

I guess a related point is that information about who is doing what online
exists _somewhere_ by default, unless careful privacy engineering reduces the
amount of information that's out there. Making the simplest kinds of
architectural changes could just shift the location where the information
exists, for example from Google or Yahoo or Amazon to dozens of random
strangers, some of whom might be working for an adversary.

~~~
api
The only mechanism I'm aware of that truly allows anonymity over your own
connection (or a connection that can be tied to you) is onion routing. On top
of that, you must do it from a separate device or isolated VM to prevent
hardware fingerprinting.

Anything less than that is like using snake oil crypto: it might make you feel
good, but it's not really there.
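
For intuition, here's a rough sketch of the layering idea (symmetric Fernet
keys for brevity; real onion routing like Tor negotiates per-hop session keys
with public-key handshakes, so treat this as a toy):

    # Toy onion routing: wrap a message in one encryption layer per relay.
    # Requires the third-party `cryptography` package.
    from cryptography.fernet import Fernet

    # One key per relay; in a real network these are per-hop session keys.
    hop_keys = [Fernet.generate_key() for _ in range(3)]

    def wrap(message: bytes) -> bytes:
        # Encrypt for the exit hop first, then wrap outward to the entry hop.
        for key in reversed(hop_keys):
            message = Fernet(key).encrypt(message)
        return message

    def unwrap(onion: bytes) -> bytes:
        # Each relay peels exactly one layer; only the exit sees plaintext,
        # and no single relay sees both who you are and what you sent.
        for key in hop_keys:
            onion = Fernet(key).decrypt(onion)
        return onion

    assert unwrap(wrap(b"hello")) == b"hello"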

~~~
phkahler
>> The only mechanism I'm aware of that truly allows anonymity...

We have a need for both solid anonymity and zero anonymity. I think the first
step is to be able to authenticate whom you are communicating with, and to
reach them without a central authority. After that, you can choose to strip
identifying information, or build a web of trust, or anything else. I think
privacy can be built on top of an authenticated net, but the reverse is
probably not possible. Today we have neither.
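
A bare-bones sketch of that authentication primitive, assuming you already
have the peer's public key from somewhere (which is where a web of trust
would come in; the `cryptography` package and names here are just for
illustration):

    # Verify a message came from a known peer, with no central authority.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    alice_key = Ed25519PrivateKey.generate()
    alice_pub = alice_key.public_key()  # shared out-of-band or via web of trust

    message = b"meet at the usual place"
    signature = alice_key.sign(message)

    try:
        alice_pub.verify(signature, message)  # raises if forged or tampered
        print("message really came from Alice")
    except InvalidSignature:
        print("reject: not from Alice")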

~~~
frio
For a long time, I've thought the phrase we want is "strong pseudonymity".

------
Animats
Kahle's approach works only for static content. It's not hard to distribute
static content; BitTorrent does it just fine. The Internet Archive stores
static content. Kahle thinks in terms of static content, because that's what
the Internet Archive does. But static content is a shrinking share of the Web
today. Despite that, it's
good to have a way to distribute static content. Academic publishing, after
all, is almost all static content. That should be widely distributed. It's not
like academic journals pay their authors.

There's the problem that distributing content means someone else pays for
storing and serving it. This is part of what killed USENET, once the binary
groups (mostly pirated stuff and porn) became huge. There's a scaling problem
with replication.

Federated networks are interesting, and there are several federated social
networks. A few even have server counts in the double digits. You could have a
federated Facebook replacement that costs each user under a dollar a month at
current hosting prices. No ads. The concept is not getting any traction.

Kahle wants a system with _"easy mechanisms for readers to pay writers."_
That's either micropayments or an app store, both of which are worse than the
current Web.

~~~
gglitch
Why are micropayments worse than the current web? People have differing
opinions on the advertising-pays-for-content-so-don't-block-it issue, but are
you referring to something technical?

~~~
Animats
No, just the general failure of the pay-to-read model. Other than the New York
Times, the Wall Street Journal, and the Economist, few general publications
with a paywall make money. They all have large, worldwide reporting staffs.
Nobody is going to pay to read your blog.

Pando Daily is trying pay-to-read. It's too soon to tell how that will work
out.

~~~
michaelchisari
The NYT/WSJ/Economist model is too exclusionary and the pricing isn't ideal.

I'm watching Blendle closely, because I think their model of micropayments for
news content, mixed with no-questions-asked refunds, could be huge.

------
wwwtyro
I'm all for this, and consider it to be inevitable in the long run. In the
short term, however, it seems like the major hurdle will be getting one of
these projects into the mainstream: for the most part, the web already does
what most people want it to do, and those people aren't going to be bothered
to install a new web browser so that they can do things they're already doing.
Especially if it lacks the features, performance, or ease-of-use of their
current browser.

So, how do we address this? Is there a "killer app" for the distributed web
that will motivate people to move to it? Can we use existing web tech like
WebRTC to bootstrap the system? Maybe a workable avenue is mobile, where
people are pretty comfortable installing new applications - what if we built
the next social network into an app based on the distributed web?

I don't know the answer, but I'd love to hear any ideas/brainstorming you
clever people have to offer.

~~~
lsjroberts
You may be interested in
[https://ind.ie/about/vision/](https://ind.ie/about/vision/)

~~~
molsongolden
[https://indiewebcamp.com/](https://indiewebcamp.com/) is a bit more
interesting and open.

------
omouse
That's been the goal of the Freenet project for a while: to build a
distributed, encrypted network protocol. It distributes storage and
processing, which is why full encryption is necessary; you don't want 10
people reading
your email when it's distributed across their machines.

The challenge for Freenet has been speed and fun. To have something like
Facebook you have to download a JAR plugin for Freenet that adds that
capability. That's not fun. The speed is slow because of the encryption and
constant syncing.

It might be better to look at MediaGoblin and Pump.io (and StatusNet to some
extent) for ideas on federated platforms. The challenge there, again, is
fun; it isn't fun to set things up.

~~~
heywire
Am I the only one who is scared to even try things like Freenet and Tor, for
the risk that someone will somehow transmit illegal content across my
connection? I'm not talking about content piracy, but the types of things that
I don't even want to type for fear of having the terms associated with my
username.

~~~
omouse
First...IANAL (I Am Not A Lawyer)

You aren't the only one, but with Freenet everything is fully encrypted. Let's
say there were a Freenet Silk Road application. You wouldn't know that a Silk
Road web page is being saved to your computer along with images of marijuana
unless you went through an indexer/search site, and even then you still
wouldn't know that those bits of data are stored specifically on your machine.

So for the cops to know your machine was used to store the drug listings, they
would have to spy on your machine and crack the Freenet protocol's encryption,
essentially monitoring it. This is why undercover
work is important to the police. If no one reports you for the crime of buying
drugs and no one discovers the drugs in transit, then the police don't know
what's happening. The only way to catch mobsters was through some undercover
work and hoping that someone in the criminal network would squeal. If one
criminal says the other 10 criminals actually had a hand in committing a
crime, the police have more to investigate and can build a case.

If you're not buying drugs or selling drugs and the data related to the drug
listings is encrypted when stored on your machine and encrypted when served
from your machine, you may be unknowingly helping a criminal to buy or sell
drugs. But I'm not sure how that's discoverable by the police and I'm not sure
how it would be turned into a criminal investigation. By that argument, all
ISPs and cell providers are in big trouble too, because they also enable drug
dealing.

There are some horror stories for Tor node operators though.

~~~
nekopa
Not talking about the legal aspects, but (and I really, _really_ hate to
bring up the "think about the children!" argument here) what if I am
unknowingly helping people who create and share child porn?

It doesn't matter (to me) if I am on the hook for it or not, I just don't know
(ethically?) how I would feel if I knew that was going on via my PC. Drugs I
don't give a shit about, and I hate how the "think about the children" people
screw our rights to privacy, but still...

Honest open question.

Edit: PS, I want you guys to keep doing what you're doing. I completely
believe in an open free web, and I want to play my part... I hate the idea of
the open web turning into a bunch of mini AOLs... Which is where we seem to be
heading at the moment.

~~~
joesmo
With a fully distributed model, everyone is essentially running a backbone
server. If you don't feel comfortable with such an arrangement, then you'd
probably have to opt out. There are plenty of people willing to put up with
it, as evidenced by the number of companies that have no problem operating the
current Internet backbone despite knowing for a fact that their networks are
used to distribute child porn and other illegal things. Generally, I think the
law is on the side of the distributor, but that's only a legal consolation,
not a moral one. I just don't see a way around it. To be fair though, the
likelihood of this happening is probably going to be much lower than for a
backbone provider, especially if users only serve up content they've consumed
themselves (seems like a logical assumption in the distributed system I'm
thinking about, but does not have to hold true for all such systems).

~~~
nekopa
I do feel comfortable with being a part of a global backbone server, and
you're right, I don't see a way around the moral issue. But I do see a light
at the end of the tunnel in the idea of serving what I consume, that is an
interesting take on it. But that does also bring up the issue of creating
walled gardens (ie niche bubbles) that may not foster the type of Internet I
want to see in the future...

For _me_ it's a thorny issue just because of recent articles I've read about
the anonymous services really allowing molesters to even set up 'dating'
services. As the post above argues, the utility may outweigh the bad: I want
the great future possibilities of an unhindered Web, but I don't want the
unheard-of new ways people can abuse children. (I've stated before that there
are a lot of crimes I don't care about too strongly, but this area gets me,
mainly because I have small kids right now)

I guess I just want to have my cake and eat it...

------
dweinus
I want this for all the reasons they list, but it seems there are huge
unanswered questions for anything beyond a permission-less static page.
Imagine you are developing a modern web app in the locked open paradigm. Is
all system data distributed, including private user data and passwords? The
only solution I can come up with is homomorphic encryption, which is not
performant enough and still probably leaves a huge timing/structure analysis
attack area if anyone can download the database. If I make any mistakes on the
database security, the entire DB is already pre-leaked to the world? The final
decryption/encryption happens in client javascript, which is a whole other
hornets' nest. Besides that, the implication is that I write my entire system
stack in client javascript that is exposed to everyone, including any
proprietary algorithms or credentials? Even if that was ok, and the system can
live in the user cloud, where does system processing that is independent of
user activity (scheduled tasks, etc) happen? Again, I want all of these
problems to be solved, but they are nontrivial.

~~~
kodablah
Point by point...(sorry for the long post)

"homomorphic encryption, which is not performant enough"

It is fast enough on a per-viewer basis, and in a DHT, downloading the
database doesn't mean it was all encrypted with one key. Each user encrypts
his data as needed, or common groups of users encrypt data for each other with
each other's keys.
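
A toy sketch of that, with a plain dict standing in for the DHT and Fernet
standing in for whatever cipher a real network would use:

    # Per-user encryption in a DHT: downloading the whole table reveals
    # nothing without the individual users' keys.
    from cryptography.fernet import Fernet
    import hashlib

    dht = {}  # stand-in for a real distributed hash table

    def put(user_key: bytes, name: str, value: bytes) -> None:
        key = hashlib.sha256(name.encode()).hexdigest()
        dht[key] = Fernet(user_key).encrypt(value)

    def get(user_key: bytes, name: str) -> bytes:
        key = hashlib.sha256(name.encode()).hexdigest()
        return Fernet(user_key).decrypt(dht[key])

    alice = Fernet.generate_key()
    put(alice, "alice/profile", b"private data")
    assert get(alice, "alice/profile") == b"private data"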

"If I make any mistakes on the database security"

This is why encryption is the underpinning. Sure, you can still leak your
private key, just as you can leak an SSH key today.

"in client javascript"

Nobody would use a distributed network where this was the case. In many cases
(e.g. MaidSafe) they are developing a browser plugin on the client side to
communicate with the backend.

"where does system processing that is independent of user activity (scheduled
tasks, etc) happen?"

Many of these now-being-designed systems have a pay-for-computing concept.
Granted, several nodes (not all, unless you want to be limited by a
single-file-line blockchain forever) have to agree on the results. Give some
computing for others' computations and get some back. As for "scheduled
tasks", timing issues are inherently difficult for these systems, and I don't
expect the "system" to trigger a job but rather a user to trigger it.
Introducing timing into these distributed networks can be hairy.

The real problem that needs to be tackled is a way for the common human to
hold his private key in his memory or some other non-digitally-retrievable
way.

~~~
dweinus
Thank you for the thoughtful responses! I am still getting my head around some
of this, so I love hearing solutions I have not thought of.

"common groups of users encrypt data for each other with each others keys"

I agree, but I think this can quickly lead to massive multiplication of data
without careful cryptographic gymnastics. It puts more pressure on the
application devs to do it right or more pressure on the network in terms of
data if you don't.

"Sure you can still leak your private key like you can leak an SSH key today."

If I leak an SSH key, I can revoke it and only data that attackers have
already grabbed is out. In the described paradigm, everything is already out
to everyone. It is all or nothing. That might not be a difference from a
theoretical point of view, but in practice it is.

MaidSafe is very interesting, thank you! It seems like more of a shared cloud,
which is halfway between present cloud computing and the completely
distributed utopia described in the article. It solves pretty much all of
these issues, with the cost of being a less-centralized network rather than a
fully distributed network. Awesome work, I hope they succeed!

~~~
kodablah
You can also change any sensitive data you have. Also, the distributed/open
web should not be one without moderation, just without mandated moderation. If
I wrote a distributed social network, I would allow the user to choose a
moderated "room"/"group" if he wished. This can facilitate deletion of items,
but in many distributed systems, they are never deleted anyways. Be it a
mostly immutable DHT or the "right to be forgotten" or whatever it is, in
decentralized systems you cannot tell people what to do with data you put out
there, you can only encrypt it. IMO, we'll still need the public auditable web
for acts requiring responsibility for security failures. Users cannot be
trusted with their own security nor can they be trusted to determine a bad
actor from a good one.

MaidSafe is fully distributed. Each user is a node (i.e. "vault" or "persona"
or whatever the proper name is).

------
LukeB42
I've been thinking about how to decentralise the web as-is since 2011; the
current development branch for this perspective on it is here:
[https://github.com/LukeB42/Uroko/tree/development](https://github.com/LukeB42/Uroko/tree/development)

It's basically a collaborative caching proxy.

One process is a proxy that can also coordinate multiple users editing the
same page and a subprocess acts as a DHT node.

You can use a Raft-like log, where each entry hashes the pubkey, the content,
and the previous entry's hash, to keep a history of edits in the network.
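
A toy sketch of such a log (standard library only; the keys and contents are
placeholders):

    # Hash-chained edit log: each entry commits to the author's pubkey, the
    # content, and the previous entry's hash, so tampering with any point in
    # history changes every later hash.
    import hashlib

    GENESIS = "0" * 64

    def entry_hash(pubkey: str, content: str, prev_hash: str) -> str:
        return hashlib.sha256(f"{pubkey}|{content}|{prev_hash}".encode()).hexdigest()

    log = []
    prev = GENESIS
    for pubkey, content in [("alice-pk", "rev 1"), ("bob-pk", "rev 2")]:
        prev = entry_hash(pubkey, content, prev)
        log.append((pubkey, content, prev))

    def verify(log) -> bool:
        prev = GENESIS
        for pubkey, content, h in log:
            if entry_hash(pubkey, content, prev) != h:
                return False
            prev = h
        return True

    assert verify(log)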

The hard part is this: how do you trust the validity of a single node claiming
to have a URL you're requesting?

It entails a rating system, and then it becomes the Byzantine generals
problem, where the overlay network should be able to tolerate up to a third of
its nodes being malicious and vouching for one another.

Feedback/any help would be much appreciated.

~~~
notduncansmith
So the "log of hashes of pubkey,content and the previous hash" is conceptually
similar to a blockchain, and I think reading into how that works (consensus,
trust, etc) would give you some insight into the issues you're describing. You
may also find the IPFS project of interest: [http://ipfs.io/](http://ipfs.io/)

~~~
LukeB42
I've found IPFS very interesting and recommended it to peers, but it lacks
the collaborative editing aspect.

Trusting the initial public offering of a resource is still an interesting
issue. IPFS is content-addressable by hash: addresses map to their content in
a computable way.
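
A toy illustration of why that computable mapping removes the trust problem
for the content itself (plain SHA-256 standing in for IPFS's multihash
addresses):

    # Content addressing: the address is derived from the bytes themselves,
    # so whoever serves them, you can verify you got the right content.
    import hashlib

    def address_of(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def fetch_and_verify(addr: str, content: bytes) -> bytes:
        # `content` stands in for whatever an untrusted peer handed us.
        if address_of(content) != addr:
            raise ValueError("peer served bogus bytes for this address")
        return content

    page = b"<html>hello</html>"
    addr = address_of(page)
    assert fetch_and_verify(addr, page) == page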

The idea for the distributed hash table in Uroko is that the keys are existing
URLs. Imagine thousands of peers all saying they have a new page on the domain
"google.com" and you can see what makes this a fun problem to solve.

~~~
notduncansmith
> Trusting the initial public offering of a resource is still an interesting
> issue.

As you mention, resources on IPFS are addressed by the hash, so I'm curious
what you mean by "trust" here - do you mean that you can't trust the
accuracy/validity of the content? I would assume that content on this kind of
network is signed by the publishing party, so if the signature checks out
against your PKI, you can trust the content.

I'm also curious as to why Git (or a spiritually similar adaptation) doesn't
fit the needs of what you have in mind. Come to think of it, I don't think I
see the use-case for Uroko - would you mind explaining?

~~~
LukeB42
Yes, and thank you for your question. I mean the accuracy of the content. As
users of a web that's been embedded in a distributed hash table, where URLs
are the keys and revisions of content are possible values, we will want some
peace of mind if an organised party configures nodes to insert advertisements,
for example. This means every node has an altruism score associated with its
public key, and the system is designed to help a node perform a distributed
summation of another node's altruism score, producing a positive or negative
total. It means having some way of verifying whether content you received was
good, and some way of weighting your ratings by your own altruism score.
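
Something like this toy sketch, where every vote is weighted by the voter's
own score (all names and numbers are made up for illustration):

    def altruism_total(reports, scores):
        # reports: {reporter_id: +1 or -1 vote about some node}
        # scores:  {reporter_id: that reporter's own altruism score}
        # Negative-score reporters are ignored rather than counted in reverse.
        return sum(vote * max(scores.get(who, 0.0), 0.0)
                   for who, vote in reports.items())

    scores = {"node-a": 5.0, "node-b": 0.2, "node-c": 3.1}
    reports = {"node-a": +1, "node-b": -1, "node-c": +1}
    print(altruism_total(reports, scores))  # positive => treat node as honest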

Another thing is that for obvious reasons initial tests of the network ought
to redact all <script> tags from documents before they're ever sent to a
browser. It also means manually implementing the same-origin policy, because
the proxy address becomes the origin of every script served to you.
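
For the redaction pass, something like this sketch (using the third-party
BeautifulSoup library; a production proxy would want a real HTML sanitizer,
but this shows the shape of it):

    from bs4 import BeautifulSoup

    def redact_scripts(html: str) -> str:
        soup = BeautifulSoup(html, "html.parser")
        for tag in soup.find_all("script"):
            tag.decompose()  # remove the tag and everything inside it
        return str(soup)

    print(redact_scripts('<p>hi</p><script>alert("evil")</script>'))
    # -> <p>hi</p>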

Uroko intends to be a spiritually similar adaptation of Git. If you look at
models.py, the concept is based on revisions that belong to a path, which
belongs to a domain. Think of them as commits on a branch belonging to a
project.

So it is a spiritually similar adaptation of Git, but Git isn't an overlay
network. An overlay network gives us an addressing scheme that identifies
nodes independently of their IPv4 or IPv6 addresses (Kademlia gives a routing
scheme with a possible 2^160 node IDs), plus message rebroadcasts, pings, and
helping peers bootstrap in by transmitting the peers you know of in every
message. Tolerating node failures and ensuring no one is left out of what is
an ad-hoc system built on an ad-hoc peering arrangement is demonstrably well
served by this sort of overlay network.
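
For anyone unfamiliar, the Kademlia addressing idea boils down to something
like this toy sketch:

    import hashlib

    def node_id(name: bytes) -> int:
        # SHA-1 gives a 160-bit ID, so the ID space holds 2**160 nodes.
        return int.from_bytes(hashlib.sha1(name).digest(), "big")

    def distance(a: int, b: int) -> int:
        # XOR metric: smaller means "closer" in the overlay, regardless of
        # where the nodes sit in IPv4/IPv6 space.
        return a ^ b

    key = node_id(b"http://example.com/page")   # keys live in the same space
    peers = [node_id(b"peer-%d" % i) for i in range(5)]
    closest = min(peers, key=lambda p: distance(p, key))
    # Lookups repeatedly hop to peers closer to the key, converging in
    # O(log n) steps.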

Also Git is not an HTTPD. Uroko is, and a design goal is to support users
simultaneously collaborating on the same document in soft real-time. You
should be able to synchronise your cache of the web with friends directly,
edit over the lan/vlan, and generally keep popular sites available to nodes in
your overlay routing table.

------
dikaiosune
I would love to live in this future -- but where's the incentive for
businesses? How do they make more money developing in this way? How do users
get more value accessing sites developed in a purely decentralized fashion?
How do we avoid JavaScript being the basis for all of this?

Interesting (almost exciting) vision, but I don't see why the majority of
existing users would move. They just don't get much value out of privacy,
versioning, reliability, etc. They get _enough_ of those things out of Gmail,
Facebook, et al for their purposes already.

~~~
astazangasta
Who cares? The web preceded the Internet company. I remember, it was a great
place full of interesting people and things, not shitty content mills crammed
full of ads. A system like the one described means that ordinary people can
build the web again, which is great. Businesses can continue as they are.

~~~
rgbrgb
Ordinary people can and still do build the web. Why would decentralization
allow for more of this? It's cheaper and easier than ever to spin up a
heroku/do/aws/google/azure instance and put up a website. Things like
squarespace and weebly even make it so you don't have to do any programming
whatsoever.

In theory I want this decentralized web stuff to succeed but in practice the
only killer apps I see are overthrowing governments and kiddy porn. I'd be
happy to be proven wrong. From where I stand, decentralization seems like more
of a social/product problem than a technical one. If you prove there's a
product that end-users want that can't be built or accessed from the current
web, people (end-users and developers) will switch.

------
vezzy-fnord
The author's proposals are strikingly similar to Xanadu, right down to
suggesting embedded micropayments:
[https://en.wikipedia.org/wiki/Project_Xanadu#Original_17_rul...](https://en.wikipedia.org/wiki/Project_Xanadu#Original_17_rules)

Ted Nelson will rise. Sort of, not really.

------
sebastianconcpt
Well, in this regard, I'm pretty mind-blown by the possibilities of
[https://ethereum.org/](https://ethereum.org/)

------
EGreg
I think it's obvious that the current web is decentralized, but is heavily
server-based. At the same time, there is something about propagating
applications across these servers... Russia can ban Reddit but they can't ban
Wordpress. For the moment, that is what we are working on at
[http://platform.qbix.com](http://platform.qbix.com) (and have been for the
past 4 years): making it easy to have a distributed _social_ network the same
way bitcoin makes _money_ distributed.

Now, how would you take it further and make the web entirely peer to peer, so
you wouldn't have to trust servers with your _security_ and _politics_? You
can have additional schemes like http and https, for various methods of
delivery and storage.

I wrote this FIVE years ago but nothing seems to have been done about it since
then:
[https://news.ycombinator.com/item?id=2023475](https://news.ycombinator.com/item?id=2023475)

That would be an easy first step that would do a lot. It's 2015 and we can't
even have XAuth ([http://techcrunch.com/2010/04/18/spearheaded-by-meebo-xauth-looks-to-make-social-sites-smarter/](http://techcrunch.com/2010/04/18/spearheaded-by-meebo-xauth-looks-to-make-social-sites-smarter/))
in the browser! (We would need a space for storing preferences where websites
from any domain could read what was written.)

~~~
jeena
> Russia can ban Reddit but they can't ban Wordpress.

That's why I'm part of the IndieWeb
[https://jeena.net/indieweb](https://jeena.net/indieweb) or
[http://indiewebcamp.com/](http://indiewebcamp.com/)

The nicest thing about it all is that I don't need to wait until someone else
writes a whole new WWW; my own website is already a small part of the whole
big thing. I just make my HTML more machine-readable and implement something
like pingback (but easier; it's called Webmention). With these small building
blocks I am, together with others, building a social network which we don't
even need to call that.
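
For the curious, sending a webmention really is just one form-encoded POST; a
sketch using the third-party `requests` library (the endpoint URL below is a
placeholder; in reality you discover it from the target page's Link header or
a <link rel="webmention"> tag):

    import requests

    def send_webmention(endpoint: str, source: str, target: str) -> int:
        # Tell the target's endpoint "my page (source) links to yours (target)".
        resp = requests.post(endpoint, data={"source": source, "target": target})
        return resp.status_code  # a 2xx status means the receiver accepted it

    send_webmention(
        "https://example.com/webmention-endpoint",  # discovered from the target
        source="https://jeena.net/some-post",       # my page, which links to...
        target="https://example.com/their-post",    # ...their page
    )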

~~~
EGreg
IndieWeb looks great. I am going to try to get involved with it.

Could you please get in touch with me by email? You can find it at
[http://qbix.com/about](http://qbix.com/about) -- just click on "contact". I
would like to find out more about this movement ... I'm beginning to
participate more in the Offline First, Distributed Web, Mesh Networking and
other such movements. Our company's spent 4 years building a platform that
would decentralize social networking, because we see it as the catalyst to
giving users control of their own data. Most people in the world are just
using centralized services these days, and it's directly related to how
difficult it is to make a seamless social layer for the web. So I think we're
solving a problem parallel to the one bitcoin solved with money. A good
solution unleashes new possibilities, like the Web itself did, like email did.

Anyway, reach out if you can! - Greg

------
Titanous
I really wish that the Internet Archive would provide bulk access to the
Wayback Machine dataset. It would allow for a lot of interesting
experimentation and research.

~~~
nekopa
Is that even possible? I don't know the latest size of the IA, but it must be
ridiculously huge by now (1 billion pages a week are added); the bandwidth
cost would be massive.

Maybe they could offer a mail-us-a-multi-petabyte-hdd service... Returned a
few weeks later full of data :)

~~~
Titanous
It's totally possible; they already have the infrastructure in place and 14PB
of data available for download. Unfortunately the Wayback Machine data is not
currently exposed publicly.

~~~
nekopa
Why do you think that is? It seems like they are really open with most of
their stuff, so why haven't they exposed the Wayback Machine with an API?

Then again, wouldn't it be pretty trivial to scrape?

(I say this as I'm working on a hellish scraping project, and the wayback
machine seems like it would be a walk in the park to scrape)

------
basicplus2
If everyone had a wireless node on their house, you might get an open web...
just leaving the problem of connecting to the backbone.

~~~
slxh
Too bad Google blocks ad-hoc WiFi on Android... that might have helped too.

------
nopcode
Just replacing DNS with a decentralised alternative would be a big step, yet
it appears to be an impossible one (Zooko's triangle).

Something this big requires everyone using the internet to switch to the new
system or it will never work, and that will never happen. It's the dancing pig
problem.

We can't go back on the decisions that have been made, only go forward.

------
charbz
Data synchronization and memory management are major flaws in the concept of a
distributed web as described by this article. Is the author suggesting taking
all the application data that exists on all web servers today and hosting it
on each device connected to the network (billions of devices)?

------
Sleaker
I was liking what the article was suggesting, but then it shamelessly plugged
BitTorrent Inc., which is one of those 'big companies' you don't want touching
anything related to privacy or freedom.

------
dubwubz
It's too late for that. What was once the internet is now basically glorified
cable TV. At this point, it's pretty much inevitable that it's going full
Disney or bust.

Hopefully bust.

------
williamcotton
It's really exciting to see these various technologies coming together!

We're working on the micropayments for authors and rights-holders aspect of
this:

[https://github.com/blockai/openpublish](https://github.com/blockai/openpublish)
[https://github.com/blockai/bitstore-client](https://github.com/blockai/bitstore-client)

------
bobajeff
I agree with one of the comments on that page that decentralizing the Internet
is fundamental to decentralizing the Web.

So in my mind the problems that need to be solved are:

Information-Centric Networking > Unstructured Mesh Networking > Distributed
Data Storage > P2P Information Retrieval

------
jokoon
Cool, that's a nice way of promoting those technologies since many don't
understand them.

I wish those things would land before the IETF. I wonder what Snowden thinks
about them. It would surely make massive surveillance much harder for the NSA.

------
saintx
A good roadmap for a new distributed web should be broken down by OSI model
layer, showing what protocols and technologies exist that need to be replaced,
what levels of the OSI model they span, and which single points of failure
lower in the stack must be accommodated. Too few people understand how brittle
the web is, given its reliance on the "magical" underpinnings of the Internet
continuing to "just work".

For example, let's say we want privacy, anonymity and high availability for
something fundamental like name lookups. It's not enough to simply replace DNS
with namecoin (L7), if there's a critical vulnerability in openssl on linux
that could force a fork in the network, possibly leading to existing blocks
getting orphaned (L6), if every single session that goes through AT&T gets
captured, and the corresponding netflow stored in perpetuity for later
analysis and deanonymization (L5), if this application's traffic could be used
for reflection amplification attacks (L4) due to host address spoofing (L3).
One might try to get around those issues by direct transmission of traffic
between network endpoints (asynchronous peer-to-peer ad hoc wireless networks
via smartphones or home radio beacons, for example), but then you not only
need to deal with MAC address spoofing and VLAN circumvention, (L2) but with
radio signal interference from all the noisy radios turned up to max broadcast
volume, shouting over one another, trying to be heard (L1) and accomplishing
little more than forcing TCP retransmissions higher up in the stack.

And really what's the point, when you can't even trust that the physical
radios in your phone or modem aren't themselves vulnerable to their
fundamentally insecure baseband processor and its proprietary OS? Turns out,
what you were relying on to be "just a radio" has its own CPU and operating
system with their own vulnerabilities.

Solving this from the top down with a "killer app" is impossible without
addressing each layer of the protocol stack. Each layer in the network
ecosystem is under constant attack. Every component is itself vulnerable to
weaknesses in all the layers above and below it. Vulnerabilities in the top
layers can be used to saturate and overwhelm the bottom layers (like when
Wordpress sites are used to commit HTTP reflection and amplification attacks),
and vulnerabilities in the lower layers can be used to subvert, expose, and
undermine the workings of the layers above them. The stuff in the middle
(switches) is under constant threat of misuse from weaknesses both above AND
below.

It might be tempting for an app developer to read this blog post and think "Oh
wow, what a novel idea! Why is nobody doing this?" But in reality, legions of
security and network researchers, as well as system, network, and software
engineers around the world toil daily to uncover and address the core
vulnerabilities that hinder these sorts of efforts.

------
acd
To enable a p2p web, use IPFS or morph.is.

