
Time to rebuild the web? - BerislavLopac
https://www.oreilly.com/ideas/its-time-to-rebuild-the-web
======
randomsearch
I don't believe the problem lies in any technology. We don't need blockchain,
it's not JavaScript minification, it isn't the centralisation of services that
are the problem.

It's the business model.

No amount of tech is going to change that. If you want to change the
situation, start social media businesses with a different business model.

~~~
TeMPOraL
It's not just social media, and it's not just ads, which other commenters have
already addressed.

The root of the problem - or at least the trunk very close to the root - is
Software as a Service. It's the trend of turning software into services. Sure,
it's nice for the vendors, and it's nice for corporate clients who can offload
responsibility to a third party. But it also means you are no longer in
control of your data, or of the code that runs on it. The availability of your
work becomes completely dependent on the business decisions of third parties,
which can - and frequently do - disappear suddenly. It's what leads to the
proliferation of ads, surveillance, and growth strategies like dumbing down
software to the point of almost complete uselessness.

If technology is meant to be - or at least is capable of being - empowering to
individuals, then turning everything into a service is the exact opposite of
that.

~~~
jasode
_> It's the trend of turning software into services. [...] But it also makes
you no longer in control of your data, and the code that runs on it. The
availability of your work becomes completely dependent on business decisions
of third parties, which can - and frequently do - disappear suddenly._

Here's a thought experiment I'd like people to ponder: Suppose we had this
decentralized hardware utopia where every homeowner had his own web server
node ... such as a "FreedomBox"[1][2] or a hypothetical ISP smart router with
builtin 16GB RAM and 10 terabytes of disk to hold Sandstorm[3] or whatever
self-hosting app stack you can think of.

Even with those favorable conditions, I'd argue we'd _still evolve towards
Youtube centralization_. I'd encourage people to think about _why that
counterintuitive result eventually happens._ (As a hint, Youtube does things
that localized technology installed in the home doesn't solve and _can't
solve_.)

 _> If technology is meant to be [...] empowering to individuals, then turning
everything into a service is an exact opposite of that._

To further explore if eventual mass migration from home self-hosted videos to
centralized Youtube is inevitable, we have to be open-minded to the idea that
thousands of people will conclude that _"putting my video on Youtube instead
of hosting it on my own server is what empowers me."_ Why would people think
that opposite thought?

[1] [https://freedombox.org/](https://freedombox.org/) [2]
[https://www.nytimes.com/2011/02/16/nyregion/16about.html](https://www.nytimes.com/2011/02/16/nyregion/16about.html)

[3] [https://sandstorm.io](https://sandstorm.io)

~~~
mac01021
> Youtube does things that localized technology installed in the home doesn't
> solve and can't solve.

Are you talking about search and content discovery? Are you talking about
replicating the most popular videos over many servers so that the system can
efficiently serve all the demand to watch those videos?

Maybe existing, fully-distributed products don't solve these problems. But it
isn't obvious to me that they _can't_.

~~~
matte_black
He is talking about monetization.

It’s not even that counterintuitive with some capitalist forethought.

~~~
TACIXAT
Monetization is easily possible; we just need a reliable way to transfer small
amounts of money electronically at a low fee. Right now the large credit card
players have a stranglehold on digital transactions. We just need something
new that makes it not awful to pay someone a few cents online.

~~~
kristianc
Consider that the people watching the videos have repeatedly shown in no
uncertain terms (even when given the option and it is made really, really easy
for them) that they don't want to pay.

~~~
TACIXAT
Really? I absolutely will not pay $8 a month to read the occasional NYT
article. I will not pay $4 per episode of TV shows I watch (I have tried this
multiple times and it sucks). I would absolutely pay 5 to 25 cents to read an
article. I would absolutely pay $1 per episode. Unfortunately, no one is
offering their content at these realistic price points. Nor could they, with
credit card processors taking 30 cents per transaction. We need a new model
that makes these costs realistic. I bet a lot of people would pay a dollar a
month for social media to avoid ads and tracking.
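A quick back-of-the-envelope sketch of the fee problem described above. The
fee structure assumed here ($0.30 fixed plus 2.9%) is a commonly quoted
card-processing rate, not a figure taken from the comment beyond the 30-cent
fixed part:

```python
# Rough look at why a fixed per-transaction fee makes micropayments
# uneconomical. The fee structure ($0.30 + 2.9%) is an assumption based
# on commonly quoted card-processing rates.

FIXED_FEE = 0.30     # dollars per transaction (assumed)
PERCENT_FEE = 0.029  # proportional fee (assumed)

def fee_share(price):
    """Fraction of the sale price consumed by processing fees."""
    return (FIXED_FEE + PERCENT_FEE * price) / price

for price in (0.05, 0.25, 1.00, 8.00):
    print(f"${price:.2f} sale -> {fee_share(price):.0%} of it goes to fees")
```

Under these assumptions a 5-cent article sale loses several times its own
price to fees, a $1 sale loses about a third, and only around the $8 mark
does the overhead fall below 10%, which is roughly the micropayment gap the
comment is pointing at.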

~~~
twblalock
Facebook had revenue of about $40 billion in 2017. Facebook has billions of
users, so if every user gave Facebook a dollar a month, that could add up to
about as much as they made last year. (I don't think every user would do that,
especially the users who live in developing countries and can't afford to
spend a dollar, but let's use the best case for the sake of argument.)

What happens when a startup decides to take on Facebook by offering a social
network that focuses on protecting user privacy? We would want that startup to
succeed, but how could it ever get there by charging a dollar a month? It
would start with a small number of users, so the total dollars per month would
be small. If Facebook were only charging a dollar, this startup couldn't charge
more than that -- a social network that has fewer of your friends on it than
Facebook and costs more to use is not very attractive.

In other words, I think the subscription model would make it difficult for
social media startups to succeed.
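The arithmetic in this comparison can be made explicit. The inputs below
(roughly 2.1 billion monthly users and roughly $40 billion of 2017 revenue)
are rough round numbers assumed for illustration:

```python
# Best-case comparison of a $1/user/month subscription against Facebook's
# 2017 ad revenue. Both inputs are rough, assumed round numbers.
monthly_users = 2.1e9      # ~2.1B monthly active users (assumed)
annual_ad_revenue = 40e9   # ~$40B revenue in 2017 (assumed)

# Best case: every single user pays $1 every month.
subscription_revenue = monthly_users * 1.00 * 12

print(f"subscriptions: ${subscription_revenue / 1e9:.0f}B/yr "
      f"vs ads: ${annual_ad_revenue / 1e9:.0f}B/yr")
```

Even in the best case where every user pays, subscriptions land in the same
ballpark but still short of the ad figure, which underlines the point that a
privacy-focused challenger charging the same dollar would start from a far
smaller base.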

~~~
dorgo
I think the lock-in effect of social networks is orthogonal to the
monetisation model. It is difficult for social media startups to succeed
regardless of how they (or Facebook) make money.

------
Hedja
It feels like this is already a work in progress using Beaker Browser and Dat.

- Beaker lets you browse entire websites (dat archives) and fork them.

- It lets you create and serve your own sites directly from the browser and
seed them from a server (like a torrent).

- It lets other sites create templated sites under your name for user
generated content.

- Visitors by default temporarily seed your website, which may reduce single
points of failure, hugs of death, and costs.

With this peer-to-peer torrent-like approach, the web can become distributed
again and feel more like a "web". There's still a lot of work left and maybe
Beaker itself isn't the best implementation for this idea, but it's a good
start.

[https://beakerbrowser.com/](https://beakerbrowser.com/)

[https://datproject.org/](https://datproject.org/)

~~~
cdata
It seemed to me that the OP was making a distinctly different suggestion: the
real challenge is offering a better UX.

Emerging distributed tech won't fix a UX problem just because it happens to be
technologically sophisticated (he calls out TOR, but I think he is making a
general comment here).

Instead, he asks, why not spend some effort giving older tech like RSS a
better UX?

I'm inclined to agree, but on the other hand it seems like the marketplace of
ideas speaks for itself and we should be keeping our eyes on the future.

~~~
Hedja
I would argue that Beaker _is_ providing a better UX for the web while solving
the centralised nature of it. Its API allows websites to provide interfaces
for creating and modifying websites tailored for specific audiences, owned by
the user.

So you can have your own RSS subscriptions in a Dat, a feed reader in another
Dat, click a button on a website to subscribe to it and add it to your Dat.
The Feed reader can keep track of what you've read and store it in its own Dat
or a different Dat (if you want client/data separation). Your mobile phone can
sync to your Dat(s) so you have Desktop/Mobile sync all in a single place.

I've not tried this, but I don't see why it wouldn't work.

~~~
kingnight
How does a mobile phone sync Dats? The main drawback I see currently with
Beaker is that there is no mobile version (for iPhone).

~~~
api
Mobile as a whole is an unsolved problem for decentralized systems. The mobile
revolution is and has been by far the most powerful driver of centralization
in the last 10-15 years.

Mobile devices are slower, have less memory, and must consume less power than
desktop, laptop, or server devices. To achieve good battery life they really
need to be in an almost-off state most of the time. Add to this the fact that
cellular data plans limit bandwidth and cellular networks are a lot slower
than most land-line networks and you also have to be very efficient with the
use of bandwidth.

This means that decentralized systems that rely on peer to peer participatory
propagation of data or distributed compute just don't work well on mobile.
Anything with P2P data propagation will use too much data plan and run the
radio too much, shortening battery life, while anything with distributed
compute will destroy battery life and turn your phone into a pocket hand
warmer.

Mobile devices really are thin clients. I call them "dumb terminals for the
cloud." Since the cloud is mainframe 2.0, mobile devices are the "glass TTY"
(e.g. VT100) 2.0.

The best solution is probably not to fight the nature of mobile devices as
thin clients but to tether them to stationary devices. But _which_ stationary
devices? Laptops are themselves mobile and are off half the time, and most
people (myself included) no longer own desktops. I have a personal server, but
I'm a geek and very much in the minority. Most people just do not own an
always-on device.

Farming this out to random always-on devices is a security nightmare or at
best is no better than the vertically integrated silo-ed cloud.

I see only three solutions:

(1) Create a niche for a personal always-on server type device and
successfully market one to the end user. It would have to be open enough to
allow the server side of 'apps' to be installed. Many have tried to do this
but nothing has caught on.

(2) Create a mobile device that's designed to be a "real computer." With 5G
coming the bandwidth for this might be on the way, but you'd also have to
contend with battery life and heat dissipation. One avenue would be to split
the CPU in two: a high-power burstable CPU and a low-power slow always-on CPU.
Require the always-on parts of decentralized services to run there and as a
result to be very optimized. The problem is that a mass-market mobile device
is a huge undertaking. Another route might be to sell a snap-on case that
carries an extra battery and also includes a mini-server CPU, RAM, storage,
etc. This would make your phone a bit bulkier but if there are benefits /
killer apps it could catch on.

(3) Solve the security problems inherent in appointing random stationary nodes
to serve random mobile devices. This would probably involve a major innovation
like fast scalable fully homomorphic encrypted virtual machines or really
tough security enclave processors.

~~~
stev0lution
I really enjoy your thinking. Do you have some kind of blog where I can
follow you? :D As you already mentioned, contributing whatever resources you
consume is fairly unreasonable on mobile devices, because it would pretty
much double data and battery usage. So while there is most likely some kind of
overhead attached to the third solution you suggested, I still think it is
probably the easiest one, because it doesn't require any new specialised
hardware.

Maybe regulation can solve some of the problems with the current systems, but
the idealist in me really wants to see provably transparent (open source) and
secure solutions which don't require trust in the hardware, so we can still
make use of modern, efficient (federated) server farms without having to give
up control over our data.

~~~
api
My seldom-updated personal site is
[http://adamierymenko.com/](http://adamierymenko.com/)

It's actually worse than doubling. The nature of distributed systems means
that participating in serving others' data normally at least triples resource
consumption. I'm not aware of any approach to decentralizing services like
Facebook, Twitter, etc. that would merely double it.

Your typical desktop or laptop has a lot of resources to spare. Your typical
mobile device has none. Mobile promotes a client/server mainframe/dumb-term
architecture for fundamental technical reasons.

------
wsy
It is somewhat ironic to see such an article coming from the VP of a large
publisher. How can he seriously suggest using Feedly as a tool for finding
relevant and high-quality content, when one of a publisher's main services is
to separate the wheat from the chaff and to guide readers to content useful
to them?

The whole issue I see with these articles is that they view the Web mainly
from the content producer side. However, the majority of Web use is consuming
content. And also the problem of the few walled gardens is on the consuming
side. Everybody can nowadays put content on the Web with ease outside of these
gardens. It is just hard to get attention for that content.

So if you really want to make the Web more distributed again, you need to come
up with models for distributed relevance assessment, spam filtering, quality
checking, etc. Work on a new 'Web infrastructure' totally misses the point.

(context: I have worked in P2P research for a while)

~~~
spiralpolitik
It's doubly ironic to see such an article from a VP of a large publisher that
recently shuttered its own digital bookstore that sold unencumbered PDF copies,
and moved its own content into the walled gardens of concentrated content.

~~~
extra88
They still operate their subscription service, Safari Books Online. Other
digital sellers, like Amazon Kindle, can and do sell books without DRM, if the
publisher wishes it. The only thing lost seems to be per-book purchases in
PDF instead of EPUB or Kindle's format. That doesn't seem like much of a loss;
an EPUB can be converted to PDF anyway.

------
martin_drapeau
When I ask friends and family who use Facebook daily, privacy and security
never come up as pain points. Media is blowing everything out of proportion.

Replacing Facebook with RSS or Feedly is nonsense. My friends and family have
no idea what those are. Facebook makes it easy for people to connect with old
friends and family members through active human interactions: like this,
follow her, read that. By doing so you do tell Facebook about you and others.
It's interactive and solves a pain point - keeping in touch and interacting
with others, your community. Feedly does not do that.

~~~
return1
It's also partly the fault of the websites themselves. In the off chance that
an average Facebook user uses Google to search and ends up on a website
outside Facebook, they'll typically be pestered with signing up to a
newsletter and then with a popup to get browser notifications (along with a
cookie popup in Europe). The user experience will usually be atrocious, with
the content hidden away behind the ads. At that point the user is not only
lost in the UI, but has lost all trust in the website itself and runs back to
their trusted Facebook. Why would they ever want to subscribe to such
monstrosities? It's a tragedy of the common monetization "secrets" that
marketers promised equally to everyone.

If webmasters and bloggers realized the benefit of an honest, dead simple user
experience, they might earn themselves a bookmark (which is currently the only
alternative to social media that average users can probably understand).

~~~
jsilence
yeah, wholeheartedly agreed.

and it is also sometimes the people's fault, when they forget civility and
humility in discussions and behave like a$$holes.

other times sane and healthy online communities fail to defend themselves
against those and against more sneaky trolls.

on top of that, communication manners seem to degrade more and more with social
media usage. I am often appalled by the tight-lipped one-line responses to
contact messages on $craigslist-like-service. a friendly message asking about
the availability of an item on sale is met with "yes it is still available.".
no "hello", no closing greeting, nothing along the lines of "if you'd like to
buy, give me a call at $number".

------
d--b
These articles are annoying. They completely miss the point.

The web as it was in the early 90s was an alternative to the major content
delivery platforms (TV and the press, mostly). So there were these massive
systems - the press and TV - that owned most of our attention span. And then
there was this cute alternative technology, with a great community, that was
as yet unpolluted by the big guys.

Today, the big content guys have colonized the medium, so it no longer feels
like the web is "a cacophony of different sites and voices", to quote the
article. But in fact we're in the same situation: big guys with money and
loads of content on one side, and small guys with communities on the other.

The web as it is today doesn't prevent you from spreading your ideas to the
world on your very own server... And websites like reddit and hacker news are
great amplifiers of small voices.

I would probably never have read this article if it had been published in the
90s. Since it's #1 on HN, perhaps 50k people have read it today! The way I use
the web feels very much like the 90s: a few aggregation sites, a lot of
excellent content written by independent guys, links between them...

Who cares about the centralized internet when the internet that we've loved
since the 90s is still there and thriving?

~~~
kristianc
Which works great, until ISPs start offering subsidized plans for Facebook,
Gmail, Spotify, Twitter, and expensive plans for everything else. When that
happens, there is very little economic incentive to have smaller websites,
smaller e-commerce websites end up being part of Amazon, and Google starts
providing more and more of its own content.

~~~
vlehto
EU is about to enforce GDPR. Net neutrality seems to be an American problem,
while the rest of the world is moving in the opposite direction.

Which economy has better prospects in the long run? Currently the internet is
very U.S. centric because most of the big players are located there. That
could change if European legislation is more supportive for small agile
companies to evolve.

~~~
kodablah
> EU is about to enforce GDPR [...] if European legislation is more supportive
> for small agile companies

Surely you see the contradiction there.

> Currently internet is very U.S. centric because most of the big players are
> located there. That could change if [...]

That's not just a coincidence, and I don't believe it's any kind of
first-mover advantage. It's about the environment in which you operate, and
passing more and more laws and rules, regardless of the intentions, is a step
in the opposite direction. One can lament the digital lawlessness, but we
can't pretend it didn't have value, or that tailored laws could have had the
same effect.

~~~
candu
Not necessarily a contradiction - GDPR could have many possible effects; it's
too early to claim that these effects will disproportionately harm small
companies.

One possibility, of course, is that larger companies will have more resources
to tackle GDPR compliance, and thus be better able to respond effectively than
small companies. However, it's also possible that, by taking privacy /
security seriously from day one and storing the minimal set of user data
needed to operate, a smaller company will have a distinct advantage over some
BigCo that must now migrate sprawling, inter-tangled distributed systems never
designed for GDPR compliance. After all, that small company now has a
compelling legal reason to avoid feature / data warehousing bloat and save
their limited engineering resources, whereas the larger company most certainly
has that bloat baked deeply into their stack due to years and years of "Big
Data" hype.

~~~
deltron3030
It's not the strictness that's the problem, it's the uncertainty. GDPR seems
like a framework for taking out anybody at will, because nobody can really
conform to the laws due to that uncertainty. If you do anything really
disruptive/innovative, you're probably going to run into problems.

~~~
candu
I _really_ dislike this use of the term "disruptive / innovative" - it's
almost a circular definition in this context, since it's effectively being
used to mean "anything new that has a high chance of running afoul of the
law". English has better words for this sort of thing: criminal, negligent,
etc.

Launching a new product / service does not give a company carte blanche to do
whatever they like, no matter what Uber et al. may prefer to believe. Suppose
I want to create a "gun-share" app, because, you know, sharing economy and
stuff. I should fully expect that, where laws exist that make this infeasible
(e.g. background checks, transfer of ownership laws, etc.), those laws will be
enforced. I should also fully expect that those laws might change.

If I've been paying attention to the broader world - something that Silicon
Valley isn't historically that great at - I should probably have seen this
coming for _years_ , since that's how long it takes to build support for
legislation. GDPR didn't come out of nowhere; data privacy, security, and
ownership concerns have been building for a while now.

In the case of the GDPR: my personal position is that it gives EU citizens
rights we should all have had from the start with respect to our data. Good on
them for passing it, and for giving it teeth. If that tramples on some service
that can't be bothered to respect those rights, I couldn't care less - and I
say this as a small business owner who's fully aware that, yes, someday I too
may face a GDPR compliance request. Maybe I'll be prepared, maybe I won't,
maybe it'll never happen; that's part of doing business.

------
nickjj
It's weird that the author mentions the 90s and how the web was not meant to
be walled gardens, but then decided to omit AOL from the story.

AOL was one big ass walled garden back in the 1990s. It was pretty popular
back then.

The funny thing is, back then it was popular with two groups of people. The
first were younger kids who wanted to stir up trouble on the network (AOHell,
etc.). The second were old people.

My uncle is ~70 and still has an AOL account because he thinks it's "the
internet". That's how strong AOL's walled garden / marketing was. Although, it
didn't help that back then you would have to dial an AOL phone number on your
modem to connect, so I could see how it could be confusing.

~~~
robin_reala
That’s a pretty US-centric take on things though. AOL didn’t exist in any
great form in the UK certainly, and the walled gardens that did (BBSes, CIX,
Compuserve etc) pretty much evaporated by the mid-90s.

~~~
isostatic
Microsoft got in on the walled gardens with MSN, and that wasn't until 95. I
think it's fair to say the first half of the 90s was walled gardens, although
by 1995 holes were opening up. A whole generation (or two) wanted to email
Terry Wogan and see the webcam, which started in 1996. By the time freeserve
launched in 1998 the walls had crumbled -- it was a very fast moving time in
the UK - over the course of 5 years we went from a few people on dial-up
Compuserve to almost the whole country knowing what the internet was, even if
they didn't realize its potential.

I'm not sure the internet is much different now than it was in 2013, or even
2008. I suppose the accessibility (always on phones), and the increase in
online media (video/audio streaming as the norm) is the biggest change from 08
to 13, but last 5 years?

------
marsrover
This article is more of a longing for the past than any concrete idea on how
the web should be rebuilt. Not only that, but it sounds like the ramblings of
someone who isn't merely longing for the past but stuck in it.

Is not being able to copy code from Facebook using view source killing the
web? Of course not. If you are just starting out there are more resources
available now than ever.

And I hope to God we never have a web built on blockchain technology.

~~~
nlstitch
My thoughts exactly. It's constant complaining and referencing old tech.

~~~
jacquesm
That reference to old tech may be there because the old tech was sufficient
and the new tech has left a gap.

Right now there are two webs and they're interwoven to the point that it has
become hard to see, but there is the 'web of applications' and the 'web of
information'. There is no particular reason why the second should be
implemented using the tools for the first.

------
dkns
I'm not buying it. Web development complexity grows, but so does the
complexity of desktop software, kernel development, etc.

You used to be able to have a small program written in BASIC or whatever that
would give you a simple popup alert on desktop. Now you need to worry about
backwards compatibility, different OSes, etc. But guess what? You can still
write simple programs, without all the complex stuff, if you want to. The
exact same thing applies to the web. You can bundle megabytes of JavaScript
for some fancy stuff, or you can have a simple static index.html with just
text.

"The web was never supposed to be a few walled gardens of concentrated content
owned by Facebook, YouTube, Twitter, and a few other major publishers. It was
supposed to be a cacophony of different sites and voices."

It still is. Facebook, YT and Twitter are simply better known than your local
pet owners' forum, but that doesn't mean the pet owners' forum doesn't exist.
It's similar to supermarket chains and your local grocery store. Both still
exist.

~~~
nicbou
It's still possible:
[http://motherfuckingwebsite.com/](http://motherfuckingwebsite.com/)

When I made All About Berlin, it was just HTML and CSS. Nothing complex there.
I eventually added Disqus and Google Analytics, but it's still possible to
build simple websites.

~~~
3pt14159
Fixing the internet is hard. There are so many things wrong with it. Take your
website for example:

1. No encryption, meaning any intermediate router could have altered the
contents or at the very least surveilled them.

2. Illegible line length without me boosting the font size or browser width.

3. No protection for the reader that this is actually
motherfuckingwebsite.com and not some .com with a Cyrillic character instead.

4. No protection for the reader that what is on this site hasn't been
fingerprinted somehow (zero-width characters, Cyrillic, etc).

5. Even if the site were encrypted, the DNS lookup would have exposed that I
was looking at it anyway.

6. It's on the root of the domain, which has a number of headaches at scale;
roots vs. www in general complicate things like understanding certs, CORS,
etc.
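The Cyrillic-character point can be demonstrated concretely. The snippet below
builds a hypothetical look-alike of the domain by swapping one Latin 'o' for
the visually identical Cyrillic 'о' (U+043E):

```python
# A Cyrillic 'о' (U+043E) renders like a Latin 'o' (U+006F) in many
# fonts, so the two domains below look identical but are different
# strings, and would resolve to different hosts.
latin = "motherfuckingwebsite.com"
spoofed = latin.replace("o", "\u043e", 1)  # swap the first 'o' only

print(latin == spoofed)                    # False: distinct code points
print([hex(ord(c)) for c in spoofed[:2]])  # ['0x6d', '0x43e']
```

Browsers mitigate this by showing mixed-script domains in Punycode form, but
a reader of a plain link has no such protection.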

I agree that we need something simpler and locked down, and I've even thought
through not just the web portion but the underlying portion as well. But it
requires a _massive_ change to get something that is secure / trustworthy,
not prone to spam, usable by the non-technical, and resilient against
well-funded mal-actors like warring nation states. And none of the
replacements make the world better for most of the world's well-funded
corporations. Silicon Valley and Hollywood would both take a real hit because
their business models both rely on insecurity. Hollywood is reliant on
surveillance to enforce copyright and SV is reliant on surveillance to supply
targeted ads.

~~~
exolymph
doesn't address all of your points, but:
[https://bestmotherfucking.website/](https://bestmotherfucking.website/)

------
noir-york
The web reduces the cost of information transfer to pretty much zero so anyone
can publish, with no gatekeepers (of old).

Scale matters. Information inequality was inevitable.

Free trade led to growing concentrations of economic power and rising economic
inequality. We also have information inequality to go with it, and because
information transfer is much lower cost than trade, information inequality
will be much higher and power more concentrated (social media stocks should
thus be valued higher than companies which make real things)

Not sure the article understands that inequality is inherent in the structure
of the web.

If you wanted to reduce inequality there are two options:

1. redistribution

2. raising the 'transaction' costs

In economics we know what those look like:

1. progressive taxation and a welfare state

2. legislation to raise environmental standards, increase minimum wages, etc.

What's the equivalent of a social welfare state and higher minimum wage when
it comes to information?

Another way of looking at this is in terms of externalities: free social media
generates both positive (connecting with old friends) and negative
externalities (fake news, election mischief, etc).

One way of internalising both would be for Facebook to charge users. That is
never going to happen. Carbon credits and the like make polluters internalise
the costs that society would otherwise incur. So, what kind of legislation
(and it will have to be legislation) would get social media companies to pay
for the negative externalities?

------
jancsika
> The web was never supposed to be a few walled gardens of concentrated
> content owned by Facebook, _YouTube_ [...]

What are the numbers on views of original Youtube content vs. views of clips
of material that infringes copyright? My guess is that the latter still makes
up a significant portion of clicks.

If someone makes a serious P2P alternative to Youtube, not only will
infringing content be there-- such content's existence will be a measure of
how successful the platform is.

Either that, or you design something like the unexplained decentralized
internet on the show "Silicon Valley" which somehow still has a single company
controlling the pipes.

If not that, then "P2P-tube" has to house Silvestri's "Back to the Future"
theme[1] just as it houses impassioned vlogs of walled-garden escapees. If you
don't have both, you'll end up with a bunch of walled-garden escapees
rationalizing the virtues of housing unpopular, homogenized content.

Anyway -- as passive onlookers it's easy to dismiss Youtube's infringing
content. But it's a lot more difficult when it's a fledgling service that
content owners cannot monetize. I think we've been through this before with
Bittorrent. What was O'Reilly's opinion then?

[1] The first Youtube link that popped up was obviously put together in a
consumer video editor and currently has over 9 million views:

[https://www.youtube.com/watch?v=e8TZbze72Bc](https://www.youtube.com/watch?v=e8TZbze72Bc)

Edit: clarification

~~~
jhasse
"Peer-to-peer-YouTube" already exists btw:
[https://github.com/Chocobozzz/PeerTube](https://github.com/Chocobozzz/PeerTube)

------
matthewaveryusa
I attribute the issues the web is having today to one simple fact: we never
democratized home servers. If routers had come out with home-server
capabilities that were standardized for data transfers, we would not be here
today. Simple as that. Is that to say that adoption of home servers was
impossible because users would never have adopted them? That's a
non-technical question to which I frankly have no idea what the answer is.
I'd like to think that if people of all walks are capable of plugging in a
wifi router and logging in/signing up to social media websites, they would
also be capable of managing their home server (as long as the UI was nice
enough), but I have no way of knowing if that's true or not.

~~~
icebraining
Most people I know never plugged in their wifi router, the ISP tech did that
for them.

------
chappi42
What I miss in the article is tax. Yes, "Much as we may complain about
Facebook, selecting relevant content from an ocean of random sites is an
important service". Same with Google; they do a terrific job providing
YouTube and the search infrastructure.

But YouTube, Facebook, and Google destroy (among others) the 'normal media'
as they oligopolize advertising. They hinder progress/competition as they
cross-subsidize their ventures with ad money.

The simplest solution imho would be to tax advertising strongly. This could be
done independently by countries. E.g. every single advert in Germany would
bear a tax of 30 - 70 %. This gives a country the power (and money) to steer
in directions which benefit the country as a whole, i.e. support traditional
media, public broadcaster, culture.

Concentrated advertising money is dangerous. It should be regulated. The same
approach is taken with e.g. cigarettes and gas: tax is used to contain an
unwanted activity (smoking) or to build infrastructure (roads). Couldn't the
same approach be taken for the internet?

~~~
noir-york
I think you're on to something.

A thought which came to mind would be for a country to levy an access charge
(think of the BBC license fee) to access social networks based on usage. The
money would then be used to clean up the negative externalities caused by the
social network in that country.

------
jaekwon
> While many of the technologies I'd use already exist, rebuilding the web
> around blockchains and onion routing would require a revolution in user
> interface design to have a chance; otherwise it will be a playground for the
> technology elite

We've been working on proof-of-stake and blockchain scaling/interoperability
infrastructure since 2014. It all starts with a classical BFT algorithm which
provides simple light-client proofs and fork-accountability. Your mobile phone
can verify transaction finality in 5 seconds, with no need for an hour of
block confirmations as in Bitcoin proof-of-work mining.

    
    
      https://github.com/tendermint/tendermint <-- blockchain consensus engine
      https://github.com/cosmos/cosmos-sdk     <-- blockchain framework in Go
      https://github.com/keppel/lotion         <-- blockchain framework in JS
      https://github.com/cosmos/voyager        <-- blockchain explorer
      https://github.com/tendermint/go-amino   <-- blockchain codec derived from Proto3
      https://cosmos.network
      https://tendermint.com
    

The Cosmos Hub will launch very soon. Let's build stuff! Onion routing for
Tendermint would be a great addition. And DNS/name-resolution on Tendermint
can actually solve Zooko's triangle.

While we're at it, let's fork Go too.
[https://groups.google.com/forum/#!msg/e-lang/cQiXS_GnKS4/Zsk...](https://groups.google.com/forum/#!msg/e-lang/cQiXS_GnKS4/ZskTifaoBgAJ)

Disclaimer: I'm a cofounder of the Cosmos project, cofounder of Tendermint,
and long-time HN lurker.

~~~
quantumwoke
It's generally considered good Hacker News etiquette to declare affiliations
on your posts.

~~~
jaekwon
Thank you, does that look good?

~~~
quantumwoke
Thank you :).

------
realpeopleio
I agree that having a few walled gardens is a problem, but having more smaller
walled gardens probably won't change much. Bad actors can still sign up, post,
spam, harass, etc. on several small walled gardens. What we need is a way to
better identify and handle bad actors.

The internet in the 90s seemed more fun and more open. Perhaps it's because
the only people really participating were not interested in abusing sites or
people. It was mostly nerds sharing nerdy things. Once money got involved, and
free everything was available, it turned into this soup of bots, trolls, AI,
fake this and that, big money, swaying public opinion, and gross content.

Smaller sites and discussion boards have been at a disadvantage recently
trying to fend off spam, bots, and sock accounts and very often lose out to
the big sites. Effectively controlling abuse and doing it a cost-effective way
can be very hard.

RealPerson.io ([https://realperson.io](https://realperson.io)) was created as
a way for websites to verify that a user is a real person, but without
disclosing personal details about the user. Each person can create a single
account on RealPerson.io and then can create separate, randomly-generated
codes for each site they use. Websites register on RealPerson.io, and at each
user signup they simply ask the user for their RealPerson code for that
website. A backend REST call to RealPerson.io with the user's code for the
site returns "yes" or "no." Sites don't share codes, so you can't be tracked
across websites. No shared logins or authentication code. Bots
would have to pay for a code for each account which would be cost prohibitive
to run a bot farm.
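The signup flow described above is simple enough to sketch. The endpoint, field names, and payload shape below are guesses for illustration, not RealPerson.io's documented API:

```python
import json
import urllib.request

VERIFY_URL = "https://realperson.io/api/verify"  # hypothetical endpoint

def interpret_verify_response(body: str) -> bool:
    """The service is described as answering just "yes" or "no"."""
    return body.strip().lower() == "yes"

def verify_signup(site_key: str, user_code: str) -> bool:
    """Server-side check at signup: send the user's per-site code and
    the site's own key; accept the new account only on a "yes"."""
    payload = json.dumps({"site": site_key, "code": user_code}).encode()
    req = urllib.request.Request(
        VERIFY_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return interpret_verify_response(resp.read().decode())
```

Because the site only ever sees an opaque per-site code and a yes/no answer, no personal details cross the wire.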

The first implementation of this approach was done with RealPeople.io
([https://realpeople.io](https://realpeople.io)) which uses RealPerson.io to
verify that the user is real.

~~~
pjc50
> Each person can create a single account on RealPerson.io

That web page doesn't explain how they guarantee uniqueness of users or pay
for the required ID checking?

~~~
realpeopleio
It doesn't really guarantee uniqueness. It also doesn't verify identity (that
this is really "John Smith"). It essentially verifies that the user has paid
for an account on RealPerson.io, which is currently $9/year. This was to
strike a balance between having to divulge too much personal info to
RealPerson.io (like an identity verification service), but at the same time
making it highly unlikely that the user is a bot. RealPerson.io doesn't try to
drive the percentage to 0 that the user is a bot, but rather make it cost
prohibitive for bots, while at the same time making it cost effective for real
people, and still protect privacy.

Also, a user only gets one RealPerson code per website, so a user can't create
multiple codes for a website and hence create multiple accounts on that
website. And if that website bans the user, that user would need to get a new
code for that website, which would mean creating a second account on
RealPerson, paying again, and facing the scrutiny of RealPerson detecting
that the user has two accounts.

------
woranl
Unlike in the past, people don’t build websites for fun anymore. I remember I
used to visit topsite ranking sites just to explore what others had built.
Nowadays, it’s all templated design hosted in a walled garden.

~~~
CM30
Sadly, I don't think many people do anything for fun anymore. Feels like every
site/channel/project/venture now is done with the possibility of a career in
mind, and nothing online is made without some sort of extrinsic benefit to the
creator.

Sometimes it seems like the idea of a hobby in general has died off.

------
fragmede
Who's played with Glitch.com already? The author of the linked article credits
Anil Dash's plan to reintroduce the building blocks of the web as having a
greater chance of success, and one of those building blocks is glitch.com.

Playing with Glitch.com, it's possible to see a way forwards that isn't just
iterating a better UI on top of an RSS reader. It's still lines of code as
opposed to trying to push some GUI-based programming language, but does very
well to promote the idea of building blocks for the web.

If most of us are basically plumbers but for data on the Internet, glitch.com
is the consumer hardware store where they sell all the various fun pipe
connectors that you'd use for building an epic ball machine as a kid.
(Github.com would be the store with a catalog that has table after table of
parts in every possible variation.)

------
kilo_bravo_3
>We'd also need to avoid many of the privacy and security flaws that were
rampant in the early internet, and for which we're still paying.

How many third (and first!) party trackers would you guess were blocked by my
privacy and security flaws-preventing browser plugin when I clicked on the
link to read the story?

I'm sure they "add value" to my experience and I am missing out on all of the
incredible opportunities that they present, but that's a chance that I'm
willing to take.

That website even attempts to log when and where you click and what text you
highlight, and then tries to send that info off to a third party.

The Vice President of Content Strategy for ORM, Inc. seems to be advocating
against content strategies he has implemented.

Is "society made us do it" still a thing?

------
zaarn
The problem usually lies in UX.

Most projects attempting to "fix the web", admirable as they are, suffer from
pretty bad UI. Usually because the UI designer was also the developer.

I'm also guilty of this.

A recent project that got around this was Mastodon and Pleroma, both have
pretty decent UI/UX.

~~~
jsilence
zeronet has a pretty decent UI.

------
EGreg
We did it. Started in 2011; it took us 7 years and over $500K invested from
our revenues, but we did it.

Shameless plug for our solution:

[https://m.youtube.com/watch?v=pZ1O_gmPneI](https://m.youtube.com/watch?v=pZ1O_gmPneI)

It’s FOSS so download and use it just like Wordpress. Feel free to email me at
greg (qbix.com) if you want any help getting started and I will either
personally help you or connect you with one of our developers on the team.

Tim Berners-Lee is leading another project called Solid, that started about 3
years ago. We are a bit ahead because we started much earlier.

------
blunte
One big problem is search. Search determines what we see (outside of what we
learn of from our peers or primary sources of info).

A "cacophony of different sites and voices" may be more fair, but it isn't
very useful. Most people shouldn't have a public voice, because most people
don't have anything valuable (or accurate) to say.

This includes me, of course, on many subjects. But here on HN, if many people
think that my comments are garbage, they vote me down and at least new readers
will be somewhat forewarned that my content may be of questionable value.

Search was first supposed to help us find things. But when there were too many
things to find, it should have helped us identify the good stuff. Instead,
search is now 90% bogus results that are generated to help steer people to a
particular product or service.

So honestly, in walled gardens where people reasonably knowledgeable about
the local subject can affect the visibility of the content within the silo,
visitors less able to discern good from bad will probably see more good than
bad.

Of course, sites need money to operate, so then comes the corruption of the
system. Even sites that try to be neutral still end up making some concessions
in order to get funded by some source (and the corporations or political
groups have the money, so they end up with influence).

------
scandox
I feel like he's worrying but he isn't sure about what. I don't think the
problems of the internet are related to how easily people can make their own
sites. It's just that the feeling that your content might actually be seen and
appreciated is gone. In the 1990s if you published something online it felt
good, like some people are going to read this and care. Now it feels like "hey
another ball tossed into the massive gaping void".

Part of this obviously is about search and curation. Part of it also is
probably just the massive loss of intimacy on the web now that it is so big.
It's easy to _feel_ less relevant when you get a few hundred views, even
though in the 1990s you might also have got only a few hundred views.

So what I'd like is to have a new intimate internet. It would be effectively
searchable by a Google equivalent that only indexed a small subset of the
internet. How exclusive it would be, how exactly it is curated and managed I
have zero idea. I'm just saying that is what I would like.

Now HN: unleash your list of stuff like this that already exists.

~~~
edraferi
You want an “intimate Internet” with its own search capability to restore the
feel of the early web? Probably we should make sure it’s easy for less
technical people to use. It needs a good story around mobile and rich media.

Oh dear, we’ve reinvented Facebook!

Seriously though, I think this is more about community than technology, and
more about channel saturation than centralization.

I think the best shot for what you want is the re-distributed web and niche
social networks (subreddits?). The peer-based web stuff will give you that
feeling of being ahead of the masses on tech, and subreddits for your favorite
hobby will let you connect with people who care about your random thing.

------
eddieschod
"The web was done by amateurs." \- Alan Kay

~~~
kabes
The context of that quote had nothing to do with what this article is about.

~~~
psyc
I really don't care, because it's important enough to repeat in any discussion
about rebuilding the Web. Involve some real systems people. Design a real app
platform, with multi-language support, especially compiled languages. Have a
cohesive end-to-end development story.

~~~
bitL
[https://en.wikipedia.org/wiki/Second-
system_effect](https://en.wikipedia.org/wiki/Second-system_effect)

~~~
psyc
If current web stacks aren't bloated and over-engineered, then I don't know
what would qualify.

~~~
bitL
Imagine it would get even worse ;-)

~~~
psyc
It’d be nice if there was a possibility of real competition at the basic
platform level. The browser model is so entrenched, yet it still amazes me
that we’ve been stuck with it for two decades.

------
nailer
View Source still exists, and the presence of JavaScript and CSS means that
HTML is more meaningful (as it's free of display information and behaviour).
But looking at the DOM in the inspector is even more useful for understanding
the tree structure.

I agree with the sentiment of the article, but developer tools are better, not
worse, than they were 20 years ago.

~~~
jacquesm
Developer tools are better mostly because webpages are 100's of times more
complex now than they were in the past whereas the typical amount of
information on the page is still roughly the same.

Understanding how a modern webpage source in combination with a browser
results in what you see is a pretty complicated affair, in the past you just
did 'view source' and it was obvious what went where.

The web is now much more machine-to-machine than it was in the past when the
humans could process the pages roughly as easily as the machines could.

~~~
nailer
The point I was making was that web pages, in terms of HTML, _aren't_ more
complex than they were in the past. An HTML page now often has fewer elements
and element attributes than the equivalent page created using older tech,
because elements no longer have styling hacks and behavior added to each of
them. HTML (stuff like GAE or BEM aside) is purer than ever.

~~~
jacquesm
Unfortunately removing the 'styling hacks and behavior' has split the pages up
into many different components that all interact in quite complex ways. The
HTML being 'purer' has not made things simpler at all. Now there are many
places rather than just a few where style and functionality can be modified.
And that can obscure intent quite quickly. As opposed to <b>this is bold
text</b> which any fool (this one included) could understand.

There are obvious benefits to separating layout, style, software and content.
But it isn't categorically true that this will always result in something that
is easier to understand by a human.

~~~
nailer
> Now there are many places rather than just a few where style and
> functionality can be modified.

Styling is done in style sheets, functionality is done in JavaScript (with the
exceptions already mentioned). Trying to create, say Hotmail in 2018 would
result in a much better to understand and debug app than the 1998 version was.

~~~
jacquesm
For a seasoned programmer, sure. But a layperson had a much better chance of
understanding an early 90's webpage with the exact same content as a modern
day one.

~~~
nailer
I specifically mean for a layperson.

    
    
        <div class="inbox">
          <div class="summary">
             <q>Hi John how's things?</q>
        (close all the things)    
    
    

Is way better than

    
    
        <td key="value">
          <td key="value">
             <img width=1px height=1px class=shim/>
             <td key="value">Hi John how's things?</td>
        (close all the things)

~~~
jacquesm
Actually, it isn't. Because it left out all the moving parts that are now in
the CSS. That second example is complete, the first one isn't, the second one
gives me a fair idea of what the result should look like, the first could look
like anything (even a blank page...).

~~~
nailer
Yes, it separates style from presentation. That's a great way to get an
understanding of the structure and content of the document and separate
concerns. 'Table data holding table data' does not do this well at all.

Not responding further after this because `Actually, it isn't.` isn't a
sentence typically found in a polite adult conversation.

------
mbaha
“Any city gets what it admires, will pay for, and, ultimately, deserves,” the
New York Times editorialized in October 1963, as demolition of the old Penn
Station began.

I suggest a corollary:

"Any web community gets what it admires (fancy UX and feeds that work like
magic), will pay for (=nothing), and, ultimately, deserves."

------
naranha
SPAs have made almost all desktop applications obsolete. DevTools have
deprecated view source. Through progress and innovation the web can do a lot
more than 20 years ago.

Sure the web is being abused for tracking by Google and Facebook but that was
also possible 20 years ago with a single img-tag.

~~~
goatlover
> SPAs have made almost all desktop applications obsolete.

Meh, Office and Adobe products still have widespread use, as do various code
editors and related tools, along with specialized applications like SPSS or
Matlab. Also, things like Slack are desktop, even though they utilize web
technologies. And there remain plenty of PC games which still get made. Some
things you don't want to run in a browser.

~~~
harperlee
Isn't Office already a "local webapp" under the hood? You can open any
document in-browser through a SharePoint server, have (almost?) full
functionality, and all documents are XML... There are lots of little signs
that Microsoft is rewriting everything to be usable seamlessly through a
browser, and that when you open it locally you just don't notice that you are
using an "office browser". Even Visual Studio has web tech under the hood, I
think.

~~~
goatlover
You still download and install Office 365 as applications and run them
independent of a web browser and can do so without an internet connection.
They also have their own updater. Yes, you can run them on the web from the
Office 365 portal, but that's the html version.

The installed version is still fundamentally a desktop platform. I'm pretty
sure it's written in C# or C++ and runs on dotnet.

An SPA is a web page run in the browser over http(s). Anything else is
desktop, server, or mobile app, regardless of what tech it's made from.

~~~
naranha
> An SPA is a web page run in the browser over http(s).

I'd argue that if they are using web technologies (like vscode) they are still
profiting from web innovations. Isn't Electron just a way to distribute an SPA
on the desktop (plus some integrations)?

~~~
naranha
> Sure, but the way I look at it is if the application runs on the desktop,
> server or mobile OS, then it's not a web app.

Yes, perhaps your definition is clearer. But the point remains that "web
technologies" have expanded tremendously in scope, and that it no longer
suffices to look at "View Source" to become a web developer.

Because today the field of web development includes things like Electron, and
you want to bring your application to as many devices as possible, with as
little development overhead as possible. This is what users expect, and it's
just not possible with the 90s-style web.

------
rafaelvasco
It all comes down to one problem: everything today runs on Money, Power and
Control instead of Collaboration and Sharing. We'll get there eventually. The
world as it's running today is not sustainable and will collapse. Just a
matter of time.

------
amelius
> Few walled gardens

That sounds surprisingly like how our economy ends up working.

Perhaps we need a broader view?

------
LukeB42
Entities that aren't media corporations tend to suck at holding a crowd's
attention relative to media corporations.

You can't <airquotes> fix </airquotes> that by "rebuilding the web".

------
bluetwo
If the value network of a market evolves differently from the way it was
"supposed to be", the problem is with your assumptions, not the technology.

You're not going to fix the problem by changing the technology.

------
mtgx
Isn't WebAssembly basically moving the web in the opposite direction?

~~~
trumped
yes... WebAssembly is even worse than Angular (they both obfuscate the web)

------
nurettin
While we are reinventing the web, why don't we have independent accreditation
institutions which regularly verify the truth behind websites? Finance seems
to have solved this problem.

------
tzs
There's an amusing parallel between housing in major tech cities and the net.
In both cases you've got the people who were there earlier, who want the
character of the place to remain mostly small entities like it was when they
arrived, and the people who came later, who want the place to become
dominated by large entities that will change most of its character.

It's amusing because the tech crowd are opposing the NIMBY people in meatspace
but _are_ the NIMBY people in cyberspace.

------
supermdguy
> You can't become a web developer by viewing Facebook's source; but you might
> by looking at a new site that isn't weighed down by all that CSS and
> JavaScript.

I agree that web bloat is a problem, but why is it the responsibility of web
developers to teach users of their site how to code? Is there any other area
of software development where that's the case? Even major FOSS projects aren't
designed to easily enable new developers to "copy the code they want".

------
dschuetz
That's the narrative you get if you say that the internet, the web and its
users are broken: "Does it affect me? No. Does it still work for me? Yes. Is
the internet broken? For users who don't know how to use it, yes."

So, why even bother arguing? Everyone lives on their special little islands of
some niche web/IP semi-commercial projects and does not particularly care about
ads or cookies, because whatever.

------
tomcooks
It's almost as if someone's feeling bad for having given up their own
independent e-store in favour of a certain DRM-ridden publisher.

------
peterwwillis
I just came back from China. I could not message my loved ones, check my mail,
update my social media, or listen to internet radio. This was not Facebook's
fault, it was not even China's fault (though they were the ones blocking me).
It was the fault of the way we use existing technology.

So Facebook is banned. Fine. That shouldn't stop me from being able to at
least send data somewhere that will end up at Facebook. And I should have been
able to send an e-mail that bounced through intermediary relays without
connecting to Gmail's servers.

A lot of the web that _could_ be peer to peer isn't. And it's not because it
is difficult, but because we simply haven't decided to use it that way. Email
is the oldest, most successful distributed decentralized application. It does
not require complex consensus. It does not use distributed databases. And it
just works. You don't even need to be connected to the end server. There's no
reason the web couldn't work like this. It is feasible today.

On top of this, where your web content is stored should be simple: it should
be stored on a device in your pocket, and cached and distributed on servers
around the world, just like email already is. Again, no complex algorithms are
needed for this. A very simple design could accomplish this with existing
technology, and perhaps an extension to a protocol's RFC and an update to some
core software.
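The store-and-forward property being described (a relay accepts a message even when the next hop is unreachable, and delivers it later) can be modeled in a few lines. This is a toy queue to illustrate the design, not a real mail transport:

```python
from collections import deque

class Relay:
    """Toy store-and-forward node: it accepts messages even when the
    next hop is unreachable, and drains its queue once it isn't."""
    def __init__(self, name: str):
        self.name = name
        self.queue = deque()   # messages stored locally, awaiting a hop
        self.delivered = []    # messages that finally reached this node

    def accept(self, message: str) -> None:
        self.queue.append(message)

    def forward(self, next_hop: "Relay", reachable: bool) -> None:
        # Deliver only while the next hop is up; otherwise keep holding.
        while reachable and self.queue:
            next_hop.delivered.append(self.queue.popleft())

relay, inbox = Relay("relay"), Relay("inbox")
relay.accept("hi from Beijing")
relay.forward(inbox, reachable=False)  # destination blocked: message waits
relay.forward(inbox, reachable=True)   # later: message gets delivered
```

No consensus, no distributed database: the sender only needs to reach some relay, exactly as the comment says of email.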

~~~
matthewmacleod
_This was not Facebook 's fault, it was not even China's fault_

Absolute shite. It's 100% China's fault. Given the widespread, damaging,
lingering ramifications of the lack of security on the Internet in general,
the _absolute last thing we should be doing_ is promoting the idea that this
information and these connections should be less secure.

I am baffled that anybody could even propose this idea.

~~~
peterwwillis
Well, I propose it because 1) HTTPS is a tiny aspect of what makes one secure
on the web, 2) I just want to talk to my loved ones on the web, 3) I don't
care what China does with my data, 4) "lingering ramifications" of this are
very manageable.

Maybe I have a skewed point of view, but after being involved in computer
security for 15 years, there are literally only two cases where I care about
the data I'm sending over the internet: logging into my bank and making a
purchase with a credit card. Virtually everything else, I don't care about.

------
cromwellian
Everything gets centralized when there are economies of scale; technology
stack fixes can't change that.

Look at the ultimate “decentralized” technology, blockchain. The majority of
mining power is now centralized in vast centralized farms with custom hardware
and cheap access to power and half the Bitcoin is held by about 1000 people.

------
kobrad
Nice that this website doesn't respect opening in new tab. Is this some ironic
take or what?

~~~
tannhaeuser
Worse, linking to Anil Dash's article on Medium [1], of all things, makes
complaining about the oligopolization of the Web kind of inconsequential.

[1]: [https://medium.com/@anildash/the-missing-building-blocks-
of-...](https://medium.com/@anildash/the-missing-building-blocks-of-the-
web-3fa490ae5cbc)

------
tokyo_fame
I'm curious whether such behavior of the web follows some formal law. Is there
a relevant proposition in graph theory, Ramsey theory maybe? Is the
probability of decentralization bounded by the system's complexity?

------
PuffinBlue
They could start the ball rolling by adding an RSS feed to their own website.

~~~
icebraining
[http://feeds.feedburner.com/oreilly/radar/atom](http://feeds.feedburner.com/oreilly/radar/atom)

------
guilamu
"That project is only likely to succeed if the rebuilt web is compatible with
what we have today, including Facebook"

Yeah the ONLY way forward is to get everything compatible with FB. Sure.

------
lxe
So the technologies such as blockchain are too complicated and require
significant UX work, yet “view source“ and copying and pasting HTML code is
somehow a better user experience?

------
eksemplar
I find these articles a little ironic. I absolutely agree with the message,
but they are always published in walled gardens themselves. Maybe that’s how
you get reader counts and views, but there is really nothing that stops
anyone from building their own.

It’s easier to use Wordpress than building your own, even easier to write a
blog entry for O’Reilly.com but if your message is a decentralized network of
enthusiasts, then maybe you should publish on your own platform, you know, to
be the change you want.

~~~
klez
Uh? In this case I'd say it's published in the correct place. The guy works
for O'Reilly, the post is about his work, it's published on O'Reilly's
website.

I would have agreed with you if it was published on Medium or Blogger or
whatever.

------
intrasight
I'm predicting a bleak, dystopian future which will make our current imperfect
web look like a golden age of tech.

------
_bxg1
Raw HTML pages would instantly become 10x more palatable if browsers would
make the default font sans-serif

------
macspoofing
If you want the internet for the wealthy (and Western) technocratic minority
then that's what you do. If you want the internet to be used by 7 billion
people, then Facebook (or something like Facebook) it is.

I've become a believer of 'walled gardens' after watching my elderly aunt
struggle with PCs and websites for years, until I set her up with an iPad,
Facebook, and Skype.

------
Trillinon
I've always felt that Google Reader held RSS back by not supporting feed
authentication. It was the de facto reader when RSS was growing, so what it
supported defined what was possible with RSS.

Feed authentication would allow for a variety of potential services and
business models.

    
    
        - Paywalls: Whether subscribing to a newspaper or supporting a blog through
          Patreon, it lets users directly pay for content provided through RSS.
    
        - Customized curation services: Whether manually curated by an editor, or
          using algorithms, you could build a business around providing content to
          individual users.
    
        - Social network integration: Instead of joining one or two massive social
          networks, I could be involved in dozens of small, focused ones, all
          collected in a single RSS reader.
    

I'm sure there are plenty of other possibilities. But for it to be useful, it
has to be available and easy to use in all the major RSS platforms.
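The paywall case in particular needs nothing exotic: plain HTTP Basic auth on the feed URL is enough, and most HTTP clients support it. A sketch (the URL and credentials below are made up):

```python
import base64
import urllib.request

def basic_auth_header(user: str, password: str) -> str:
    """Build the standard HTTP Basic Authorization header value."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return "Basic " + token

def fetch_authenticated_feed(url: str, user: str, password: str) -> bytes:
    """Fetch a paywalled RSS/Atom feed with per-subscriber credentials."""
    req = urllib.request.Request(
        url, headers={"Authorization": basic_auth_header(user, password)})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# e.g. fetch_authenticated_feed("https://example.org/members/feed.xml",
#                               "subscriber42", "s3cret")
```

A reader that stored credentials per feed and sent this header would cover all three business models above; the hard part was never the protocol, it was reader support.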

------
hasa
Blockchain mentioned! Where are other useless buzzwords ?

~~~
JPGalt
cloud, deep-learning, robotics, augmented reality, computer vision, actionable
analytics, RUSSIAN BOT, Internet of Things, big data, agile, design thinking,
freemium, gamification, incubator, lean, SaaS, thought leader...this is enough
because thinking about this is making me lose all respect for humanity.

------
sidcool
What happened to the Seif Project?

------
lucasnichele
There is nothing new under the sun. We must pay more attention to new and
validated technologies, like blockchain.

------
rasengan
Conway’s Law

------
fimdomeio
I've been toying with the idea of creating a web 0.5 that consists mostly of a
meta tag and a piece of CSS (body:after{...}) that hides the contents while
directing users to use a text-mode-only browser. The rule would be that a page
is valid if it doesn't contain CSS or JS. I think this could be an incredibly
fun experiment, somewhat in the same style as the tilde club. If it became
popular enough, who knows... maybe the community could even create some sort
of web rings :O

~~~
lioeters
I like the idea of a text-only browser, back to the basics. Oh and I miss web
rings!

~~~
AnIdiotOnTheNet
I'd kinda like to reinvent the "web" as a TUI. Like, every site is just a
program compiled to a portable bytecode that interfaces in terminal codes and
you'd "browse" it with a modified telnet client. No more bullshit styling you
can't read supported by 200 javascript dependencies that refuse to let you
click buttons if you have the audacity to be behind a dns filter, no more
style over substance, no more animated gifs, no more advice animals, no more
endless over-engineering to see who can do the least with the most resources.

~~~
zimpenfish
Sounds a bit like Gopher. Which didn't really catch on.

------
arippberger
PiperNet

------
quantumwoke
The article mentions the demise of Google Reader (rest in peace) as a turning
point for the spread of fake news. I agree that there needs to be some
counterweight to the deluge of news designed to influence public opinion.
However, I don't think the tool to build that with is RSS.

I recently learned about jsonfeed [0] from Hacker News and built a news
aggregator for my own personal use. I believe the future of news content is in
aggressive, hackable aggregation tools for a variety of content. jsonfeed is a
solid foundation to build on :).

[0] [https://jsonfeed.org](https://jsonfeed.org)
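For the curious: a valid JSON Feed needs little more than a version URL, a title, and a list of items, which is part of what makes it feel hackable. A minimal example built and consumed with nothing but the stdlib (the feed contents are made up):

```python
import json

# Minimal JSON Feed (version 1): version, title, and items are the
# required top-level fields; each item needs an id plus some content.
feed = {
    "version": "https://jsonfeed.org/version/1",
    "title": "My Example Feed",
    "items": [
        {"id": "1", "url": "https://example.org/1",
         "content_text": "Hello, world"},
    ],
}

serialized = json.dumps(feed)

# An aggregator consumes it with a plain JSON parser:
parsed = json.loads(serialized)
texts = [item["content_text"] for item in parsed["items"]]
```

No XML namespaces, no RSS/Atom dialect differences; any language with a JSON parser can produce or consume this directly.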

~~~
bhaak
That makes no sense.

RSS and jsonfeed are just containers. Both may contain real or fake news.

Everybody can put up a feed in RSS or jsonfeed without any trouble.

~~~
quantumwoke
jsonfeed is a much simpler and more hackable protocol than RSS. It's much
easier to build scalable and sustainable aggregators that filter fake news in
jsonfeed without the cognitive overhead of RSS. Without hackable tools we end
up with the walled gardens of Facebook and Google Reader.

~~~
bhaak
You sound like a buzzword machine.

A simple RSS feed can easily be hand written. But of course, you don't do that
because there are so many mature tools to create or manipulate an XML feed
already.

Whereas with jsonfeed there is not much yet because it is so new.

If you can hack JSON, you can hack XML as well. Even programmer novices can do
this.

Edit: Reworded the last sentence that could be interpreted as a personal
attack. Wasn't meant as such, I was going for the impersonal you.

~~~
quantumwoke
Please be civil :). Avoiding ad hominem attacks is more conducive to
discussion.

I have already justified my preference for jsonfeed. XML has proved many times
to decrease developer productivity. To my knowledge open source projects that
have dropped or diminished their use of XML have only become more popular over
time. Extrapolating a little bit, switching to JSON can only be a good thing.

~~~
grzm
A key difference now compared to the early days of XML and RSS is that there
weren't great libraries out there to handle the processing, both output and
input. Many people were rolling their own. That definitely wasn't great. It
was similar to using JavaScript prior to the widespread adoption of jQuery.
However, that's not the case anymore. I think there are possibly a lot of
benefits to recommend jsonfeed over RSS (or, a better solution in my opinion,
Atom), but you build yourself a bit of a straw man by ascribing the problems
of early XML usage to the situation today.

