
A plan to rescue the Web from the Internet - staltz
https://staltz.com/a-plan-to-rescue-the-web-from-the-internet.html
======
romaniv
The reason the Web needs rescuing is that it's a not-particularly-well-designed
system that has been patched over and over again for the last quarter century.
And now it has been degraded to a delivery layer for JavaScript apps, a poor
RPC protocol, and an overly complex UI rendering toolkit.

It should have had protocol-level technologies for preserving historic data,
for search, for offline use, for authoring and publishing. If you look
closely, cloud services created tools to easily do all those things and that's
how they got in control.

 _" The Internet was done so well that most people think of it as a natural
resource like the Pacific Ocean, rather than something that was man-made. When
was the last time a technology with a scale like that was so error-free? The
Web, in comparison, is a joke. The Web was done by amateurs."_ \-- Alan Kay.

A lot of web devs were enraged by his comment without listening to the
context. He was talking about the lack of protocol-level solutions for
_resilience_ of the Web.

~~~
carussell
> It should have had protocol-level technologies for preserving historic data,
> for search, for offline use, for authoring and publishing. [...] He was
> talking about the lack of protocol-level solutions for resilience of the
> Web.

That's completely at odds with what Kay actually advocates for. His position
is strongly aligned with the "mechanism, not policy" philosophy. That is, what
he advocates for is _less_ of what you want, not more. This is apparent from
that Dr Dobbs interview, his work with VPRI, back to his OOPSLA speech in
1997. To wit:

> HTML on the Internet has gone back to the dark ages, because it presupposes
> that there should be a browser that should understand its formats. [...] You
> don't need a browser if you followed what this staff sergeant in the Air
> Force knew how to do in 1961. Just read it in, it should travel with all the
> things that it needs, and you don't need anything more complex than
> something like X Windows.

Kay has always sat on the diametrically opposite side of the table from those
favoring the Principle of Least Power—one of the most underappreciated tenets
of computing. Tim Berners-Lee called it out in Axioms of Web Architecture:

> At the other end of the scale is the weather information portrayed by the
> cunning Java applet. While this might allow a very cool user interface, it
> cannot be analyzed at all. The search engine finding the page will have no
> idea of what the data is or what it is about. The only way to find out what
> a Java applet means is to set it running in front of a person.

You can get a better view of what a world that more closely adhered to Kay's
vision would look like in Stanislav's (an adherent's) post, "No Formats, No
Wars": [http://www.loper-os.org/?p=309](http://www.loper-os.org/?p=309)

~~~
romaniv
Having watched nearly every talk/interview of Kay I could find on YouTube
(which includes the OOPSLA keynote) and having read some of his writings I
think I have a reasonably good understanding of what he is advocating for.

Here is Kay talking about the Internet's resilience in a bit more detail:

[https://youtu.be/NdSD07U5uBs?t=1472](https://youtu.be/NdSD07U5uBs?t=1472)

 _> His position is strongly aligned with the "mechanism, not policy"
philosophy._

I don't see how this opposes or contradicts any of what I wrote above.

~~~
jasode
_> and having read some of his writings I think I have a reasonably good
understanding of what he is advocating for._

In the Dr Dobbs interview[1] that you pulled the quote from, Alan Kay is not
talking about protocols for web resilience. Actually, there is no mention of
"protocols" in that article at all.

Even outside of that particular article, AK doesn't talk much about
"protocols" other than the protocol of "message passing" in Smalltalk which is
a different meaning from "internet protocols".

Tim Berners-Lee is a figure that speaks of new protocols for decentralization,
etc. That's not Alan Kay's angle. If you think AK is imploring the tech
community for new internet protocols, you need to cite another AK source other
than that Dr Dobbs interview.

 _> [Alan Kay] was talking about the lack of protocol-level solutions for
resilience of the Web._

That's not what he's talking about. He's complaining that the "web" did not
make it easy for people to _do computing_. One example in that interview was
the Wikipedia page on the Logo programming language not being able to execute
Logo programs.

Kay's criticism of the "web done by amateurs" looks to be the same train of
thought about his ideal Dynabook enabling people to _participate in computing_
rather than something like the Apple iPad where people just read news or watch
videos.

[1]
[http://www.drdobbs.com/article/print?articleId=240003442&sit...](http://www.drdobbs.com/article/print?articleId=240003442&siteSectionName=architecture-
and-design)

~~~
romaniv
Just because Kay criticized the Web for its limits on user participation does
not mean he did not criticize it for other reasons.

Here is an entire talk on complexity and robustness in software where the Web
is mentioned as one of the worst offenders of design by accretion:

[https://youtu.be/QboI_1WJUlM?t=1046](https://youtu.be/QboI_1WJUlM?t=1046)

(I'm linking to a specific part, but the entire thing is worth listening to.)

Also, he does talk about protocols. Heck, VPRI built a TCP/IP implementation
by treating the protocol as a language and using a non-deterministic parser.

[http://www.vpri.org/pdf/tr2007008_steps.pdf](http://www.vpri.org/pdf/tr2007008_steps.pdf)

Page 17.

------
dash2

      Here’s the problem with IP addresses: there aren’t enough of them....
      As a consequence, the Internet has allowed intermediate
      computers to rule. These are like parasites that have grown
      too large to remove without killing the host. The technical
      flaw that favored intermediate computers prefigured a world
      where middlemen business models thrive.
    

The handwave is the word "prefigure". How did IPv4 and NAT play any role in
the dominance of Facebook, Airbnb et cetera? This is an analogy masquerading
as an argument.

    
    
      It is not fundamentally necessary to have any intermediate 
      company profiting whenever a person reads news from their 
      friends, rents an apartment from a stranger, or orders a 
      ride from a driver.
    

The author provides no evidence that the services of Airbnb, Uber etc. have no
value added. These companies carefully designed interfaces to help us find
what we need. If they did not add value, we would still be using newsgroups.

~~~
schnable
And this is important because the author misses the point as to why people get
value from the "middlemen" \-- to find stuff. Even with a content-centered
web, there will be a need to find content and people, and new discovery
engines and aggregators will emerge.

~~~
at-fates-hands
I was struck by the same thing.

There is a reason we moved from newsgroups to HTML and browsers, and then to
search engines. Each step was a better, more powerful tool for finding content
faster and more efficiently.

These days so many of the tools we're using are so entrenched that, even with
some of the changes the author talks about, it would still be a monumental
task to get people to change.

I mean, look at how bad FB is (on several levels) and look at all the
alternatives which allow you to keep your information private and share what
you want like Diaspora or Freenet. I tried (repeatedly) to get friends and
family to join my Diaspora pod and tried to convey all the advantages of it
being private. Nope, nada, no way. None of them ever joined and many just said
FB has such deep roots into their lives, giving it up would have a huge effect
on their lives, it was crazy.

~~~
gregknicholson
> FB has such deep roots into their lives, giving it up would have a huge
> effect on their lives

Then our task is to persuade them that the huge effect would be positive!

This reads like an episode of Black Mirror, only scarier because it's not
science fiction: [https://coolguy.website/writing/the-future-will-be-
technical...](https://coolguy.website/writing/the-future-will-be-
technical/present-day-party.html)

~~~
freeflight
> Then our task is to persuade them that the huge effect would be positive!

How so? What makes Facebook so appealing is the fact that pretty much
everybody is on there, just like a telephone book. As such, an alternative
with a huge positive effect has to deliver at least this "centralization" to
fulfill the same needs Facebook has been serving until now.

This is a pretty common theme around IT-savvy communities: people complain
about Facebook, about how there are so many better alternatives, and about how
you supposedly don't even need Facebook. But the reality for over 2 billion
people (among them many friends and family members of said tech-savvy folk)
looks quite different: they use Facebook because it's what everybody uses. It
wouldn't even work without that, which is what most Facebook competitors come
to realize sooner or later.

I just don't see any good solution to any of this that doesn't involve some
paradigm shift in how we handle private information as societies and as
businesses.

~~~
gregknicholson
What if a lot of your friends and family are on it, and some interesting
people you don't know yet are on it, and your favourite brands… _aren't_?

What if our secret advantage is that _not_ everyone is on it?

~~~
freeflight
> What if our secret advantage is that not everyone is on it?

Facebook might have started out that way, but I don't think that angle is
gonna help you build a new, even bigger Facebook.

How long did Facebook actually stick to its "Only students" rule? Imho that
was more marketing than anything else.

~~~
gregknicholson
Scuttlebutt is designed deliberately so that you don't see the whole network,
only the bits you care about.

Partly this is view-filtering, but also your user agent only requests and
stores data if it _might_ be relevant to you, i.e. friends of friends and
replies to their messages.

It's not really one network, but a group of potentially-overlapping networks.
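A toy sketch of that hop-limited replication rule (not Scuttlebutt's actual
code; the follow graph and names are invented): your agent only replicates
feeds within a small number of follow-hops of you.

```python
# Hedged sketch: decide which feeds to replicate by keeping only authors
# within `max_hops` follow-hops of ourselves, i.e. friends and friends
# of friends. Breadth-first search over the follow graph.
from collections import deque

def feeds_to_replicate(follows, me, max_hops=2):
    """follows: dict mapping a feed id to the set of feed ids it follows."""
    seen = {me: 0}  # feed id -> hop distance from us
    queue = deque([me])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue  # don't expand beyond the hop limit
        for friend in follows.get(node, ()):
            if friend not in seen:
                seen[friend] = seen[node] + 1
                queue.append(friend)
    return set(seen) - {me}

follows = {
    "me": {"alice"},
    "alice": {"bob"},
    "bob": {"carol"},   # three hops away: not replicated
}
print(sorted(feeds_to_replicate(follows, "me")))  # ['alice', 'bob']
```

Everything outside that radius simply never hits your disk, which is the
"group of potentially-overlapping networks" effect.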

~~~
freeflight
> It's not really one network, but a group of potentially-overlapping
> networks.

That might hold true for the technical implementation, but it doesn't hold
true for the actual user experience.

If you are from the US and looking to befriend somebody from France, then you
don't have to join "Facebook.fr" to make that connection, it all works through
Facebook.com.

Not too long ago the same process looked pretty much like this:

"Are you on AIM?"

"No, are you on ICQ?"

"Nah, but I'm on MSM!"

"Sorry I'm not on MSM, but I'm on Linkedin!"

"No good for me I don't like Linkedin!"

"Guess I'm gonna make an MSM account -_-"

For that very same reason I ended up creating my first Facebook account on
Facebook.de, which was a scam site back then. But decades of
compartmentalizing users into "regions" had convinced me that if I wanted to
get any use out of Facebook, I'd better sign up with the German version of the
site, as most of my friends/family are German. That's how alien this whole
"aggregate all your social contacts in one place" idea was back then.

~~~
gregknicholson
> > It's not really one network, but a group of potentially-overlapping
> networks.

> That might hold true for the technical implementation, but it doesn't hold
> true for the actual user experience.

Talking about Scuttlebutt here, it's actually the other way round. It's one
protocol and any two people on it _could_ connect, but in practice you won't
see everyone, because you only have a social connection to certain people.

------
dfabulich
In my view of history, P2P takes off and does well when it's faster for end-
users than centralized solutions; P2P solutions fail when centralized
solutions are faster.

This holds regardless of the political advantages (or disadvantages) of P2P.

BitTorrent is often faster for large files than centralized solutions, so
people use it. (It's still inconvenient for a new user to kick off their first
torrent, but WebTorrent will help.)

Freenet is essentially never faster than alternatives, and so it has stumbled.

If some future P2P network can outperform HTTP, then it will succeed;
otherwise, it will fail.

Regardless, unless we see a breakthrough in battery technology, it will never
be the case that lots of people will carry around mobile phones in their
pockets whose radios are always on and participating in a mesh network. The
battery would die in an hour or two.

P2P networks work best when coupled with reliable power networks. But if you
have reliable power, you can probably set up a reliable wired network, too.

~~~
workthrowaway27
I think this is an accurate assessment of how things play out in the real
world. It seems like many people here are attracted to decentralized solutions
for ideological reasons, but don't realize/acknowledge that the vast majority
of people don't care and will go with whatever is convenient for them.

------
gregknicholson
There's a lot of white on this map: [World Population
Density](http://www.luminocity3d.org/WorldPopDen/) — the hard part is gonna be
getting data between Europe and New Zealand without Big Wire.

I love how Staltz's solution to this goes hand-in-hand with re-personalising
our interactions. In short:

1\. _The Next Billion_ haven't yet become used to the idea that useful tech
services must be global, and provided for you by a single corporation.

2\. They want to communicate with other people they know, physically nearby-
ish.

3\. Sneakernet is never slower than talking in person.

4\. Isolated mesh networks and comms with days of latency are viable for these
new net users.

5\. Get enough people using a mesh and the networks will start to connect.

6\. Where the internet is already pervasive, privacy and autonomy advocates
are resisting corporate control, and choosing decentralised alternatives.

7\. For now, we can exploit the existing internet to handle long-distance.

8\. But eventually, enough people will have the bottom-up mindset, and the
weak links in the mesh will become worrying.

9\. So a solution will emerge to fill in the gaps in the mesh, using ships /
beacons / balloons / satellites / modulated whalesong.

~~~
WJW
Building a connection across a continent just by hopping from mesh node to
mesh node is like trying to drive across a country by just using dirt paths.
It is clearly not impossible, but there is a reason that most people choose to
use highways for long distances. It is similar with a several-hundred-hop long
path through a mesh network vs a single hop over a suboceanic fiber
connection.

In particular, step 5 is untrue over large distances such as across the
Sahara. Step 6 is true, but does not necessarily lead to mass adoption. Step 7
is feasible, but costs a LOT of money at scale. Also, it seems that economies
of scale would rapidly push out all inefficient means of communication in step
9 (assuming that sailing ships across the oceans and/or launching satellites
will continue to cost money) and we would end up back with suboceanic fibres.

Finally, while the "next billion" may not be attached to their unlimited
netflix just yet, it seems highly unlikely that people in developed nations
are willing to accept drastic slowdowns in their connections. Streaming video
costs twice the bandwidth (downloading plus re-uploading) of plain watching on
EVERY mesh node between the viewer and the producer. Caching might reduce this
a bit, but is by definition not effective for "long tail"
content. I'd be curious to see how long the meshes hold together when it turns
out you can't see Game of Thrones because the next node over has no more
bandwidth to spare.
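The relay-cost arithmetic can be sketched quickly (illustrative numbers,
assuming each intermediate node both downloads and re-uploads the stream):

```python
# Aggregate bandwidth spent across a mesh path grows linearly with the
# number of hops, because every relay node pays the stream rate twice.
def aggregate_bandwidth_mbps(stream_mbps, hops):
    relays = hops - 1          # nodes between producer and viewer
    # viewer downloads once; every relay spends 2x the stream rate
    return stream_mbps * (1 + 2 * relays)

# A 5 Mbps stream over a single hop vs. a 20-hop mesh path:
print(aggregate_bandwidth_mbps(5, 1))   # 5
print(aggregate_bandwidth_mbps(5, 20))  # 195
```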

~~~
staltz
OP here. I find more than 4 hops impracticable, and that's not what I was
suggesting in the article. I suggested (probably not clearly enough) three
ways meshes can be efficient: (1) local-first social networking, where the use
case is "download the latest updates and read later" rather than instant
messaging; (2) gossip, the eventual propagation of data over long distances;
(3) satellite-based meshes to cover large-distance hops.
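The gossip mechanism can be pictured with a toy simulation (the encounter
schedule is made up; this is not SSB's actual protocol):

```python
# Nodes exchange missing updates whenever they meet, so data eventually
# reaches everyone without any single long-distance connection.
def gossip(meetings, initial_holder):
    """meetings: ordered list of (a, b) pairwise encounters."""
    have = {initial_holder}  # nodes that hold the update so far
    for a, b in meetings:
        if a in have or b in have:
            have.update((a, b))  # the encounter syncs both peers
    return have

# Node 0 posts an update; a chain of purely local encounters carries it.
chain = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(sorted(gossip(chain, 0)))  # [0, 1, 2, 3, 4]
```

Note the order of meetings matters: propagation is eventual, paced by how
often peers physically cross paths.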

------
tannhaeuser
I'm all for p2p (really), but how about taking back control (sorry) of the Web
by actually developing and enforcing declarative/markup technologies and
standards instead of praising JavaScript because "it's not half bad" and
adding procedural features to the Web (APIs, WASM)? With the current state of
affairs wrt privacy and self-proclaimed standardization bodies, I'm not sure
the Web is worth preserving.

~~~
ashark
HTML with great default styles and better built-in form and interactive
elements (date pickers, sortable tables) would be ideal. Screw CSS: it slows
everything down. Just have good defaults and don't let pages mess with them
much. If we must have CSS, at least jettison the complex parts like animation.
Make it very, very simple and weak. No JavaScript: it can't be trusted if it
has the ability to initiate connections or modify form data, and it isn't
useful enough without that to justify the added bloat to the client.

God, it'd feel _lightning_ fast compared to what we have now. _And_ it
wouldn't be able to spy on you. A document-centric (once again) web that's
somewhat less crippled than Gopher would be amazing, but we absolutely _must_
cut out things like scripting in order to make the client trustworthy (and
fast).

~~~
unit91
> it'd feel lightning fast compared to what we have now

I disagree with this completely. So much of what JS does is _increase_ the
(perceived) speed and sanity of the user experience. For example, in a
scriptless world, your HN up/down vote can't be done without a full page load,
which would come with the added headache of changing the state of the world on
your page because stories and comments have changed their ordering. Solution?
Return of the chronological thread (no thanks).

And there are a _lot_ of other applications in this same boat. I don't know if
you are old enough to remember webmail when the refresh button was how you
checked for new mail. I'd greatly prefer what we have now, where mail just
appears over a socket. Online drawing/CAD apps or games without CSS and JS?
Forget it. Or go back to Flash.

Bottom line, we have the ecosystem we do today because the users demand it.
For us as devs, it could be better, more consistent, etc. but we need the
capability.

~~~
ashark
> I disagree with this completely. So much of what JS does is increase the
> (perceived) speed and sanity of the user experience. For example, in a
> scriptless world, your HN up/down vote can't be done without a full page
> load, which would come with the added headache of changing the state of the
> world on your page because stories and comments have changed their ordering.
> Solution? Return of the chronological thread (no thanks).

Counterpoint: when JS/Ajax-free versions of sites are available, they're
usually faster in practice. Gmail, Google Calendar. Tried them in their low-
to no-AJAX versions lately? I use basic HTML gmail because I got friggin' sick
of how slow AJAXy gmail and Inbox were on my stupid-fast MacBook. For the
specific case of upvotes on HN, IIRC this would be solvable with the
appropriate 2xx response (I forget which), signaling success but not
triggering a page load, without requiring any modification to current web
clients. Obviously this hypothetical user-focused web successor would be well
served by more flexible linking options (more method support, mostly) and
built-in form capabilities—which happen to be things the current web really
ought to have too, for that matter—but I don't think HTTP itself would need to
change.
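For what it's worth, here's a minimal sketch of that pattern; I'm assuming
the status in question is 204 No Content, which signals success without
handing the client a new document to render (the endpoint and vote store are
made up for illustration):

```python
# A toy form endpoint that records an upvote and replies 204 No Content.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

votes = {}

class VoteHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Record the vote keyed by path, e.g. POST /vote/123
        votes[self.path] = votes.get(self.path, 0) + 1
        self.send_response(204)  # success, but no new page to load
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), VoteHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

resp = urllib.request.urlopen(
    urllib.request.Request(f"http://127.0.0.1:{port}/vote/123", method="POST")
)
print(resp.status, votes["/vote/123"])  # 204 1
```

A browser submitting a form to an endpoint like this would, at least
historically, record the vote and stay on the current page.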

IMO email and chat with notifications and such belong in their own programs,
if you want stuff like live notifications instead of a static page. I'd rather
that bloat, and code that can communicate with the outside world when I
haven't _specifically_ told it to, _not_ be included with my document browser,
because it makes every operation on said browser slower, (way) less safe, and
less predictable.

Or hey, how about RSS/Atom for new mail notifications?

> I don't know if you are old enough to remember webmail when the refresh
> button was how you checked for new mail.

I was on the web well before Gmail existed, so yes, I remember the bad old
days of webmail before AJAX. Most of the improvements we've seen would be
served about as well by smarter, better client-provided form elements and some
other, smaller modifications to the client side, save for update pushes. I
don't think what we've gained from mixing traditional web stuff with the huge
set of things a page might now do to/for you with JavaScript and CSS has been
even close to worth the cost in trust, security, privacy, and predictability.
Quarantine that stuff somewhere else, plzkthx.

~~~
NoGravitas
I think you're quite right. Browsers would need to add quite a few
capabilities in order to support the world you envision, but those
capabilities would be simpler and less generic than javascript support.

~~~
ashark
HTML built-in elements—especially forms and tables—have atrophied badly, I
guess because the JavaScript crutch is right there. That's got to be why
things like the file picker are still awful, why there's nothing like the
date/time pickers that any sane native UI kit includes, why tables don't have
(re)sorting built in, and so on.

So the work would be: 1) delete like 80-85% of Gecko or Servo or whatever, 2)
add back about 5% of that to patch in better form elements and such, 3) write
some default styles that don't suck and a system for editing them or loading
themes, maybe even per-site (nice, but not needed in the MVP), including user-
submitted styles, and 4) overcome the chicken/egg problem of getting content
on it to attract users, or vice versa (there's the tricky bit).

~~~
anigbrowl
For 4), what if you started with clean versions of Wikipedia and things that
could be built from public datasets? That would be enough content to provide a
meaningful comparison experience, and if you had good content creation tools
the smart set could migrate over to HyperNet or whatever this brave new world
is called, originate content there, and dump it to the web as a secondary
option, recruiting people through the blogosphere.

I would absolutely be up for running something like this in parallel to my
existing browsers. Right now I have Chrome and Firefox going, migrating things
slowly over to Brave, and Tor about half the time. Plus desktop RSS readers
and other stuff. I'd love to have some clean virtual space to work in. One
more open window wouldn't be a problem.

------
tobbyb
The first choke point is the ISP. You are completely dependent on the ISP or
mobile provider to get on the network. Once on the network there are multiple
choke points around IP addresses, DNS, CA authorities, registries and more.

As long as you are dependent on anyone to get on the network, it by definition
can't be decentralized. Consumer wireless tech is heavily regulated, and
governments are extremely paranoid about communication channels they don't
control and can't monitor.

This is unlikely to change, because there are no incentives to develop
technology that truly empowers individuals; there is no profit in it, it's a
social good. If developed, it will be demonized, made illegal, and limited to
a minority of dissenters; the general population is unlikely to jump through
hoops to get on a network.

~~~
ytjohn
He takes a while to get there, and doesn't quite drive the point home, but
this is actually what André Staltz is proposing. Mesh-first devices. If we
start at the bottom of his article, here's basically the plan:

1\. Work on polishing IPFS, CJDNS, SSB[1], Beaker Browser and some others into
a "mesh-first" setup.

2\. Sell, subsidize, or outright gift mobile devices to offline users in
Africa (I'm going to assume these are Android smartphones), set up to
automatically join a mesh network, with these apps installed.

3\. Users will transport these phones around, adding and creating content.
Their phones will automatically sync content with other devices as they come
in range. In general, only content that the user is interested in (that they
follow) will be synchronized, though some may act as hubs syncing all manner
of content for later redistribution.

4\. As usage grows, various regional and national mesh networks will build up.

I think a good comparison is to Cuba's weekly sneakernet[2] distribution, just
with wireless mesh.

[1]: Secure Scuttlebutt is one of the author's creations:
[https://www.scuttlebutt.nz/](https://www.scuttlebutt.nz/)

[2]: [https://www.wired.com/2017/07/inside-cubas-diy-internet-
revo...](https://www.wired.com/2017/07/inside-cubas-diy-internet-revolution/)

------
niftich
Content-addressed overlay networks work pretty well over the Internet, and
they could work equally well on mesh networks (as the article posits), if not
for two rather awkward problems: storage at rest, and liveness of a storage
node.

These two factors even interact to make the situation worse. Because at any
point nodes can drop off, and you never know whether that condition is
temporary or permanent, a distributed datastore has to redundantly store
everything. This needs much more space than just storing everything at its
origin, as is done in location-addressed networks like the Web.

These are not insurmountable problems, of course; it's just that right now,
conditions like storage on nodes, link asymmetry, traffic distribution
asymmetry still favor centralization.

And fundamentally, the operators of individual nodes (i.e. ordinary people)
are often not very selfless, and not enough of them contribute to the health
of the mesh even if they discretionarily choose the mesh: seeding ratios on
BitTorrent networks that are not user-adjustable (e.g. the old World of
Warcraft downloader, Facebook patchsets) are much, much higher than seed
ratios where the peer can bow out at any time, despite all of the latter
users choosing BitTorrent voluntarily.
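A back-of-envelope sketch of why churn forces that redundancy (the
probabilities here are illustrative, not measurements):

```python
# If each storage node is offline independently with probability p, a
# blob with r replicas is unreachable with probability p**r. Solve for
# the smallest r that meets an availability target.
def replicas_needed(p_offline, availability):
    r = 1
    while p_offline ** r > 1 - availability:
        r += 1
    return r

# Flaky home nodes (offline 30% of the time) vs. a data-centre origin:
print(replicas_needed(0.30, 0.9999))   # 8 copies needed
print(replicas_needed(0.001, 0.9999))  # 2 copies needed
```

The gap between those two numbers is the storage penalty a distributed
datastore pays relative to a reliable origin server.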

~~~
gregknicholson
> a distributed datastore has to redundantly store everything

Scuttlebutt gets around this: you only store and rebroadcast your friends'
stuff (and anything else relevant to you).

You're actually seeing content from your local storage — which is why
Scuttlebutt works seamlessly offline — so there's no need for altruism.

~~~
malteof
Scuttlebutt?

~~~
niftich
"a decent(ralised) secure gossip platform" [1], also referenced in the
article.

[1] [https://www.scuttlebutt.nz/](https://www.scuttlebutt.nz/)

------
feelin_googley
After reading the FCC Chairman's idea of "the Internet" in his Notice of
Proposed Rulemaking (below), I think maybe a better plan would be to rescue
the internet from the web. Every example he references of internet usage is
_web usage_ or email. He does mention "DNS and caching" but in the context of
the potential effect of _removing these_ from the services available to users.
(para. 37)

The general tone of the NPRM seems to be that an ISP can and will block or
throttle any non-web or non-email traffic. That could include any peer-to-peer
innovations that seek to restore the original functionality of the internet,
such as those mentioned by the author.

By contrast, the dissent by Commissioner Clyburn specifically mentions Skype
as an example of internet usage. She believes the traditional notion of
"permission-less innovation" is under threat from the Chairman's proposed
approach.

[https://apps.fcc.gov/edocs_public/attachmatch/FCC-17-60A1.pd...](https://apps.fcc.gov/edocs_public/attachmatch/FCC-17-60A1.pdf)

The author highlights the importance of distinguishing "the web" from the
internet. Perhaps nothing is more important. The internet has more value than
the web. The web is severely limited in functionality. The internet, still
underexploited in its potential, does not suffer from the same limitations.

------
quickben
Funny.

Before DRM, spam, having to monetize five minutes of daily blogging into
full-time pay, unskippable 30s YouTube ads, profiling of every possible user
behavior and other shady shit like that:

We wondered how can we _improve_ the web.

Now, we wonder how to _save_ it.

~~~
soared
Spam, scams, and bad ads have been around since the start, what internet were
you using?

~~~
mr_spothawk
Spam, scams and badvertizements have been around seemingly since long before
the internet.

------
coding123
My wife gets increasingly pissed off when she searches for various materials
she wants to purchase. A specific site keeps coming up for her that she
absolutely hates and that starts with an E. There's nothing special about the
links, and likely nothing important about the specific products, other than
that the domain name starting with E has billions of cross-links all over the
web. It makes the random long-tail search terms (that, mind you, are so
freaking obscure) all go to this same site. It's driving her nuts.

At this point SEO has been totally gamed and search is totally useless now. We
NEED an alternative.

~~~
workthrowaway27
I've had a few experiences recently where I was searching for a webpage that I
knew existed (because I'd been to it previously), but that was fairly niche
and couldn't find it through Google even after trying several different sets
of search terms and even fragments of text I remembered from the site.

------
pfraze
I worked on Secure Scuttlebutt and then founded Beaker. Happy to answer
questions.

~~~
codeisawesome
1\. Is there still going to be room for entrepreneurship on this new Web?

2\. To what extent will the lack of ability to monopolise hurt the ability to
create large revenues and profits?

3\. Does this community also know about projects like OpenMined, which seek to
democratise access to the data & infra needed to run ML algorithms? Any
particular reason you can think of for it not to have come up in André’s post?

~~~
pfraze
2\. See my reply to 1!

3\. I'm not personally familiar with OpenMined but decentralization is a
pretty active space. SSB and Dat/Beaker have a few things in common: overlap
in the communities, a focus on p2p hypermedia protocols, and not a lot of love
for cryptocurrency solutions. That last point may change over time, but
currently the usability, performance, and waste of the coins has left us all
saying, "can't we do this without them?" And we think the answer is yes.

(That said, I respect other people trying the coins. Everybody has to place
their bet somewhere.)

~~~
gregknicholson
Any system with a fungible unit of account leads to plutocracy. _Discuss_ :)

------
robotbikes
What makes people use centralized services is their utility. People will pick
up new tools and repurpose them if they find them useful. In theory the web is
supposed to be about communication and connection.

I think the emphasis on building a local network is a good idea. P2P mesh is
cool, but until it provides opportunities that don't exist otherwise, it is
unlikely to surpass the incumbents.

The notion of building this for places without internet access is a positive
angle but also tricky. Charity is seldom as successful or scalable as user
driven initiatives. A lot of mobile phones now exist in places w/o "internet"
per se. But from my understanding, converting a 3 year old smartphone into a
mesh 1st device seems challenging from a wifi driver and power consumption
and app perspective.

Balancing design that is easy for non-technical people with the notion of
eating your own dogfood, one can theorize about building alternatives to the
hierarchical Internet. This is the challenge though, figuring out how to
build utility that is superior to the walled gardens and is in the hands of
the users to control.

~~~
gregknicholson
> The notion of building this for places without internet access is a positive
> angle but also tricky. Charity is seldom as successful or scalable as
> user-driven initiatives.

Scuttlebutt's founder lives on a boat. There's no need for charity. We're all
“us”.

> converting a 3-year-old smartphone into a mesh-first device seems challenging
> from a WiFi driver, power consumption, and app perspective.

Yeah, actually engineering the bloody thing is always the tricky bit.

> This is the challenge though, figuring out how to build utility that is
> superior to the walled gardens and is in the hands of the users to control.

Yeah, actually designing the bloody thing is always the tricky bit.

------
EGreg
I like the phrase "local-first" software.

It's been happening for a while:

[https://qbix.com/blog/index.php/2017/12/power-to-the-
people/](https://qbix.com/blog/index.php/2017/12/power-to-the-people/)

------
bane
For us "old" people who remember the internet before the web -- one of the
things that's really different about the modern internet is the very limited
set of protocols and applications that the average user interacts with. It
really used to be that every different service type mapped to a different
protocol and HTTP (and HTTPS) has just sort of subsumed everything. Back in
the old days to minimally use the internet you'd have to know at least telnet,
ftp, nntp, gopher (maybe), smtp, pop and maybe a handful of others.

(Okay, maybe modern users use more protocols than I'm admitting to, but it's
very obscured these days since so many different applications just ride on
HTTP(S) anyway.)

There's really nothing preventing some motivated group from just spinning up
an entirely new kind of service that "fixes" all that's wrong with the web,
with a custom protocol and application stack.

"But the network effect!"

And that's something us old-timers remember: lots of great services spinning
up and down, even when the web was just a handful of sites. The web earned its
network by having better general utility than the other things that were
attempted, so why can't a new, better service eventually earn it?

(In the meantime, us hacker types will enjoy having a cool new playground to
muck around in for a few years.)

~~~
acdha
> There's really nothing preventing some motivated group from just spinning up
> an entirely new kind of service that "fixes" all that's wrong with the web,
> with a custom protocol and application stack.

Firewalls and security policy. The internet is a much less trusting place
these days[1] and anything too new will have adoption issues once it gets out
of a core hacker group.

1\. Remember IRIX shipping with telnet and a guest/guest login enabled by
default to “foster the spirit of collaboration”?

------
whimful
scuttlebutt is my main social media these days... oh and in case you're
looking for git that's not wedded to github via your comments and issues,
scuttlebutt plays really well with git - you push comments and code into your
gossip-cloud together

~~~
pavel_lishin
What is it?

~~~
staltz
If you asked about the GitHub alternative, it's this:
[https://git.scuttlebot.io/%25RPKzL382v2fAia5HuDNHD5kkFdlP7bG...](https://git.scuttlebot.io/%25RPKzL382v2fAia5HuDNHD5kkFdlP7bGvXQApSXqOBwc%3D.sha256)

~~~
mnzaki
The bit about being permissionless is kind of unclear.

"This seems to work well: the SSB network thrives off of being a group of
kind, respectful folks who don't push to each other's master branch. :)"

Will this hold if the community expands to 10x or 100x the size? Surely
conflicts of interest will arise, or just plain assholery/trolling.

------
gregknicholson
> Smartphone manufacturers sell mesh-first mobile devices for the developing
> world

I hope puri.sm is listening. This is what I want my Librem 5 to be!

~~~
nicole_f
Hi, yes, we are listening :)

First of all, the Librem 5 is as open as possible, i.e. if there is a way to
support this or another kind of mesh networking, then you will be able to add
that function to the Librem 5 - as long as the software needed for it is free
and works on Linux on ARM (ARM64, to be more precise). There will be no
restrictions from our side, none at all.

But I also have to point out that we are bound to what we find on the market
concerning hardware and drivers. The latter can be modified to support what is
needed for meshes, but hardware is usually hard to change. We are currently
looking for WiFi and Bluetooth chips with the best possible free-software
support that do not require runtime firmware and are power-saving enough for a
mobile device. This is turning out not to be easy. Once we settle on a chip,
we will check it thoroughly against mesh requirements.

So yes, your voice(s) are heard! And we will do our best to support mesh
networking, in software and hardware, on the Librem 5.

Cheers nicole

------
mwcampbell
Unless I missed something, this article lacks a call to action (or multiple
calls to action for different people). What can individual readers of this
article do to help realize this plan? For example, if I have money, where can
I give it?

------
VectorLock
If he figures out how to make a good mesh network and bootstrap it, then
everything else is easy. That blog post was pretty much a long-winded way of
saying "we don't have a mesh network that works."

------
dec0dedab0de
Does anyone here have any experience with MANETs? I really like the idea, but
is there any way to defend against malicious actors? Are there any protocols
that are clear winners? Is the reason it hasn't caught on solely because the
big ISPs are against the idea?

~~~
elihu
In general, if you allow anyone to be a router, then anyone can advertise a
low-cost route to anywhere and then drop the packets. You might be able to
combat that with some kind of decentralized reputation system, or with
policies that prefer existing routes that have been in use for a while and are
known to work, or that send a few test packets over any new "suspicious" link
before using it to send real data.

I don't know what current research says about this problem, or whether any
current routing protocols attempt to deal with it at all.
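The two mitigations sketched above (prefer routes that have proven themselves,
and probe new "suspicious" ones before trusting them) could look something
like this toy route table. It's an illustration only, not any real MANET
protocol, and all the names are invented:

```python
class RouteTable:
    """Toy route selection that distrusts new low-cost route
    advertisements until they have passed enough probe packets."""

    PROBES_REQUIRED = 5  # probes a route must pass before it is trusted

    def __init__(self):
        # dest -> list of {"next_hop", "cost", "probes_passed"}
        self.routes = {}

    def advertise(self, dest, next_hop, cost):
        # New advertisements start untrusted, however low their claimed cost.
        self.routes.setdefault(dest, []).append(
            {"next_hop": next_hop, "cost": cost, "probes_passed": 0})

    def record_probe(self, dest, next_hop, delivered):
        # Successful probes build trust; a single dropped probe resets it.
        for route in self.routes.get(dest, []):
            if route["next_hop"] == next_hop:
                route["probes_passed"] = (
                    route["probes_passed"] + 1 if delivered else 0)

    def best_route(self, dest):
        candidates = self.routes.get(dest, [])
        trusted = [r for r in candidates
                   if r["probes_passed"] >= self.PROBES_REQUIRED]
        # Prefer the cheapest *trusted* route; fall back to the cheapest
        # untrusted one only when nothing has proven itself yet.
        return min(trusted or candidates,
                   key=lambda r: r["cost"], default=None)
```

A blackhole node advertising cost 1 but dropping probes never accumulates
trust, so an honest higher-cost route wins once it has delivered a handful of
probes. The fallback to untrusted routes is exactly the window of
vulnerability described above.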

In practice, I expect that real wireless meshes can still be somewhat secure
by operating as managed clusters of routers that don't peer with other
clusters of routers without the administrator explicitly enabling peering. So
for instance, you might have one neighborhood with a dozen houses with routers
managed by some guy, and a nearby HOA with another cluster of routers managed
by someone else, and they might find out about each others' networks and agree
to create a bigger network by peering. So, your network's trust model can
piggyback on the trust model of the real world -- if you know someone in real
life, it's a lot less likely they're going to try to take down your network
than some random person you don't know.

------
niftich
The IPv4 perspective is a red herring. NATting was indeed necessitated by IP
address scarcity, but a domestic installation that does NAT comes with
ancillary benefits, like giving you, the home user, a single place to control
access to your network.

In IPv6, it's nice that you have an address space that's not only big enough
to accommodate every device, but large enough to even burn through addresses
and treat them as disposable. But once IPv6 becomes widespread, there will
need to be some rethinking of how to manage firewall rules between your own
devices, and of how to segregate your portion of the network from the spurious
(and sometimes malicious) traffic of everywhere else.

~~~
throwaway2048
Firewalling on a router/gateway gives you the same ability, and none of the
downsides of NAT.

NAT is in no way a requirement for this.
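As a sketch of what that looks like in practice: a stateful IPv6 firewall on
the gateway can reproduce NAT's "no unsolicited inbound connections"
behaviour without rewriting any addresses. This is an illustrative nftables
config, not from the article; the interface name "lan0" is an assumption:

```
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept   # replies to outbound traffic
    iifname "lan0" accept                 # trust the LAN side
    icmpv6 type { echo-request, nd-neighbor-solicit,
                  nd-neighbor-advert, nd-router-advert } accept
  }
  chain forward {
    type filter hook forward priority 0; policy drop;
    ct state established,related accept   # replies to LAN-initiated flows
    iifname "lan0" accept                 # LAN may initiate outbound
  }
}
```

The `ct state established,related accept` rules do the NAT-like work:
outbound connections from the LAN are tracked and their replies admitted,
while unsolicited inbound traffic hits the `drop` policy.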

~~~
WorldMaker
Not to mention that you really need to focus on securing the endpoint anyway,
because an increasing number of your devices are mobile and aren't "home" for
an increasing amount of the time. Central router/gateway/NAT management made
the most sense when the majority of home devices were desktops fixed in place
in the same rooms. In a world of mostly laptops and hand computers, worrying
about the strength of your home router/gateway/NAT is increasingly silly, as
those devices may spend as much time or more at your work, or at the coffee
shop down the street, as they do at "home".

------
genki_teacher
People were sounding the same alarm in the '90s with AOL. History has shown us
that, at some point, a leaner, more innovative company will surpass Facebook.

~~~
gregknicholson
Who says it'll be a _company_?

~~~
jlebrech
less design by committee

------
austincheney
If you really want to rescue the web, find a way to restrict adware, spyware,
and the like. I upgraded the internet at my house to 1 Gbps (about 890 Mbps
down and 920 Mbps up) and I hardly notice any speed difference surfing the
web. Sad. Everything else is fast as hell, though.

~~~
bo1024
The good news is we have tools (browser plugins) to help us take control of
our own machines while web browsing. The bad news is that sites are actively
hostile to these tools. I think it's as much an incentives problem as a
technological problem.

------
zxy_xyz
Now we just need to save the web from slow JS apps. Is there something doing
that right now by chance?

~~~
tomcooks
You, by clicking on the "reader mode" of your browser and telling the owner of
the site about it

------
zeep
I thought that was what the web archive and similar projects were doing...

------
shak77
If I wanted to rescue something from the Internet, it would not be the web

~~~
dang
Would you kindly stop posting unsubstantive comments to HN?

