
We Must Revive Gopherspace (2017) - stargrave
https://box.matto.nl/revivegopher.html
======
icebraining
HTML ain't the problem; you can build websites without tracking. If you
somehow managed to pull enough users over to Gopher, they'd just write Gopher
Chrome, start adding new features into it that conveniently allow tracking,
and gradually kill off the original protocol (see EEE). The problem is
economic, and the solution must be too.

~~~
rixrax
I’m old enough to have used gopher on VT100 terminals as an undergrad in
college to try and do some ‘work’. And when HTTP/WWW arrived, it didn’t take
long to switch to the better mousetrap. And this wasn’t just because you could
now render a GIF in NCSA Mosaic on an Indigo workstation. Everything was just
better in this new HTTP world.

Let’s fast forward to today. Yes, we’ve gone overboard all over, but then
again, Gopher [I think] doesn’t come standard with TLS, and it hasn’t gone
through the evolution that has made HTTP[S] the robust and scalable backbone
it is today.

What I’m trying to say is that we should not casually float around pipe dreams
about switching to ancient tech that wasn’t that good to begin with. Yes,
electric cars were a thing already in the early 1900s, and we maybe took a
wrong turn with the combustion engine, but with Gopher I think we should let
sleeping dogs lie and focus on improving the next version of QUIC, or even on
inventing something entirely new that would address many of the concerns in
the article without sacrificing years of innovation since we abandoned Gopher.
Heck, this new thing might as well run on TCP/70, never mind that UDP appears
to be the thing now[0].

[0]
[https://en.m.wikipedia.org/wiki/HTTP/3](https://en.m.wikipedia.org/wiki/HTTP/3)

~~~
ethbro
Well said. The article conflates a content / rendering problem with a protocol
solution.

A lightweight HTTP/TLS subset that severely limits client-side execution
expectations would seem to accomplish the same goals.

While repurposing all the amazing tech we've built since the 1990s.

Essentially, "just pass me the bare minimum of response to make Firefox Reader
View work."

... but then we wouldn't be able to serve high-value targeted ads, would we?

~~~
redahs
If the desire is to make online content more readable, it might be worth
starting with the assumption that all content downloaded from the network will
be read on a black-and-white ereader device with no persistent internet
connection.

This assumption might require substantially reworking the hyperlink model of
the internet, so that external references to content delivered by third
parties are sharply distinguished from internal references to other pages
within the same work.

~~~
ivan_ah
Your idea of hypermedia with an offline browsing assumption is very good!
Imagine an "offline archive" format that contains a document D plus a pre-
downloaded copy of all referenced documents R1, R2, ..., Rn, along with the
assets necessary to render R1..Rn in some useful manner (e.g. save the HTML
and main-narrative images from each page Ri, but skip everything else).

This "offline archive format" has numerous benefits: (A) the cognitive
benefits of a limited/standard UI for information (e.g. "read on a
black-and-white ereader device"); (B) accessibility: standardizing on text
would make life easier for people using screen readers; (C) performance,
since everything is accessed on localhost; (D) async access: reaching the
"edge" of the subgraph of the internet you have pre-downloaded on your
localnet could be recorded and queued up for async retrieval by
"opportunistic means" (e.g., next time you connect to free wifi somewhere,
you retrieve the content and resolve those queued "HTTP promises"); (E) the
cognitive benefits of staying on task when doing research (read the actual
paper you wanted to read, instead of getting lost reading the references, and
the references' references).

I'm not sure what "standard" for offline media (A) we should target... Do we
allow video or not? On the one hand, video is very useful as a communication
medium; on the other, it's a very passive medium, often associated with
entertainment rather than information. Hard choice if you ask me.

I'm sure such "pre-fetched HTTP" already exists in some form, no? Or is it
just not that useful if you only have "one hop" in the graph? How hard would
it be to crawl/scrape 2 hops? 3 hops? I think we could have a pretty good
offline internet experience with a few hops. Personally, I think async
interactions with the internet limited to 3 hops would improve my focus. I'm
thinking of hckrnews crawled + 3 hops of linked web content, a clone of any
github repo encountered (if <10MB), and maybe DOI links resolved to the
actual papers from sci-hub. Having access to this would deliver 80%+ of the
daily "internet value" for me, and more importantly allow me to cut myself
off from useless information like news and youtube entertainment.
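
A hop-limited crawl like that is easy to sketch. Here's a rough breadth-first
version; `fetch` is a placeholder you'd supply, and the regex stands in for
real HTML link extraction:

```python
import re
from collections import deque
from urllib.parse import urljoin

def crawl_plan(fetch, seed, max_hops=3):
    """Breadth-first, hop-limited crawl: returns {url: hop distance}.

    `fetch(url)` is assumed to return the page body as a string."""
    seen = {seed: 0}
    queue = deque([seed])
    while queue:
        url = queue.popleft()
        if seen[url] == max_hops:
            continue  # at the edge: recorded, but not expanded
        try:
            body = fetch(url)
        except Exception:
            continue  # unreachable pages just stay un-expanded
        for href in re.findall(r'href="([^"]+)"', body):
            link = urljoin(url, href)
            if link not in seen:
                seen[link] = seen[url] + 1
                queue.append(link)
    return seen
```

URLs at exactly `max_hops` get recorded but never fetched, which is where the
queued "HTTP promises" idea would plug in.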

update: found WARC
[https://en.wikipedia.org/wiki/Web_ARChive](https://en.wikipedia.org/wiki/Web_ARChive)
[http://archive-
access.sourceforge.net/warc/warc_file_format-...](http://archive-
access.sourceforge.net/warc/warc_file_format-0.16.html#anchor49)

~~~
ethbro
The issue is this thrashes caching at both the local and network levels,
decreases overall hit rate, and doesn't scale as links-per-page increase.

How many links from any given page are ever taken? And is it worth network
capacity and storage to cache any given one?

------
yoz-y
How about reviving the “blogosphere” instead? Does it even need reviving? Most
of the personal or tech blogs I visit do not have heavy ads or tracking on
them, still offer full RSS articles and so on. People who care still have a
lot of nice web sites to go to.

Maybe what we need is a search engine that penalises JS and tracker use.

~~~
superkuh
The web of the 90s is alive on Tor. On Tor, the idea of running third-party
executable code in the age of Spectre is (properly) seen as absurd. We just
need to bring back webrings and we'll be set.

Since it's on Tor, there's no need for evil centralization for DoS protection,
since it's baked into the protocol. Additionally, the onion vanity name you
brute-forced cannot simply be taken away from you if there's political or
social pressure on your registrar or above.

No, we don't need gopher. We need people to stop running third party code like
it's some normal thing. We need devs to stop making websites that don't render
unless you run their code.

It's really not that hard to run a hidden service. No harder than running a
webserver. And everyone's home connections are fast enough now.

~~~
pard68
Color me intrigued. How do I get started? (I have the Tor browser but can
never find anything worthwhile, reddit just talks about illegal stuff)

------
psim1
Why not just serve static text over HTTP? At least then you'd have the ability
to inline images. This -- the use of JavaScript and other technology for
tracking purposes -- isn't a problem for Gopher to solve. It's a problem for
web content creators.

~~~
okl
Correct. Here are some examples for content creators of what a website can
look like without all the bloat:

[https://motherfuckingwebsite.com/](https://motherfuckingwebsite.com/)

[http://bettermotherfuckingwebsite.com/](http://bettermotherfuckingwebsite.com/)

[https://perfectmotherfuckingwebsite.com/](https://perfectmotherfuckingwebsite.com/)

[https://thebestmotherfucking.website/](https://thebestmotherfucking.website/)

~~~
rdiddly
Hadn't seen the last two. There's a slippery slope effect going on.

~~~
runeb
Really, I feel like it's showing 0.01% of the process that got us to where we
are today.

------
peatmoss
Gopher is a really fun (and constrained) protocol. I’ve experimented a bit
with interactive gopher servers in the past.

A cool thing is that you can build a server in an afternoon starting with
nothing more than your favorite programming language, some TCP server docs,
and the Wikipedia page.
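
For a sense of scale, a toy server along those lines fits in a few dozen
lines of Python. The host, port, and menu contents below are made up, and a
real server would listen on TCP port 70:

```python
import socket
import threading

# Made-up host/port for local testing; real gopher servers use TCP 70.
HOST, PORT = "localhost", 7070

# A toy menu: (type, display string, selector, host, port) per item.
MENU = [
    ("i", "Welcome to a tiny gopher hole", "", "null", "0"),
    ("0", "About this server", "/about", HOST, str(PORT)),
]

PAGES = {"/about": "Served by a few dozen lines of Python.\r\n"}

def render_menu(items):
    # Each menu line is tab-delimited; the listing ends with a lone ".".
    lines = ["%s%s\t%s\t%s\t%s" % item for item in items]
    return "\r\n".join(lines) + "\r\n.\r\n"

def handle(conn):
    with conn:
        # A gopher request is just a selector followed by CRLF.
        selector = conn.recv(1024).decode("ascii", "replace").strip()
        reply = PAGES.get(selector, render_menu(MENU))
        conn.sendall(reply.encode("ascii"))

def serve():
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        while True:
            conn, _addr = srv.accept()
            threading.Thread(target=handle, args=(conn,)).start()

# To run: serve()  -- then point a client at it: lynx gopher://localhost:7070
```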

I’d love to see people build some gopher sites to do stupid and crazy things.
Interactive fiction over gopher? Sure! SQL to gopher gateway with ascii viz?
Awesome!

Everyone should have a gopher hole... probably firewalled off of any
production networks.

------
oso2k
I always like to refer to Ian Hickson's Requirements for Replacing the Web [0]
when this topic comes up. It seems to encapsulate well the social,
technological, and economic dynamics involved in replacing the Web. However,
few attempts (Crockford's Seif Project [1][2], MS's Project Atlantis and
Project Gazelle [3][4][5]) seem to have heeded this wisdom.

[0]
[https://webcache.googleusercontent.com/search?q=cache:8zGGJQ...](https://webcache.googleusercontent.com/search?q=cache:8zGGJQ5VxwEJ:https://plus.google.com/%2BIanHickson/posts/SiLdNL9MsFw+&cd=1&hl=en&ct=clnk&gl=us)

[1] [http://seif.place/](http://seif.place/)

[2] [https://youtu.be/1uflg7LDmzI](https://youtu.be/1uflg7LDmzI)

[3] [https://mickens.seas.harvard.edu/publications/atlantis-
robus...](https://mickens.seas.harvard.edu/publications/atlantis-robust-
extensible-execution-environments-forweb-applications)

[4] [https://www.microsoft.com/en-us/research/wp-
content/uploads/...](https://www.microsoft.com/en-us/research/wp-
content/uploads/2016/02/gazelle.pdf)

[5] [https://www.microsoft.com/en-us/research/blog/browser-not-
br...](https://www.microsoft.com/en-us/research/blog/browser-not-browser/)

~~~
jancsika
> I always like to refer to Ian Hickson's Requirements for Replacing the Web
> [0] when this topic comes up.

It's not that hard. Just iterate on any old idea that's even slightly more
appealing to hack on than a full-blown browser. That includes... let's see...
nearly anything!

Then just be smart and dedicated about specifying the behavior of the new
thing and figuring out workarounds for the awful parts.

Ian Hickson did it[0].

[0]
[https://webcache.googleusercontent.com/search?q=cache:8zGGJQ...](https://webcache.googleusercontent.com/search?q=cache:8zGGJQ5VxwEJ:https://plus.google.com/%2BIanHickson/posts/SiLdNL9MsFw+&cd=1&hl=en&ct=clnk&gl=us)

------
brownbat
I've half seriously evangelized a few times here for a .text TLD.

It wouldn't solve everything, but would make a nice playground that might be
taken interesting places.

------
MistahKoala
The article discusses reviving gopher, but doesn't mention how to access it
(sure, I could invest a bit of time and effort googling how to do that, but
that seems beside the point for an article evangelising its revival).

~~~
Joe-Z
Yes, I was a little disappointed by that too. It even has a "gopher://..."
link at the end, and when I click on it I can't even open it. Just tell me how
to open the one example you provide, man!

~~~
JdeBP
There was a time when gopher: scheme URLs would _just work_ , because Gopher
support was built into popular WWW browsers. Netscape Navigator and Internet
Explorer both had it, for example.

It's not that gopher: is some novelty that no-one has ever adopted. It's that
a WWW browser _nowadays lacks quite a lot_ of things that used to be commonly
built into WWW browsers. gopher: scheme support has gone completely, as has
news: support. ftp: support has been reimplemented several times, and is
significantly poorer now than it used to be.

* [http://jdebp.eu./FGA/web-browser-ftp-hall-of-shame.html](http://jdebp.eu./FGA/web-browser-ftp-hall-of-shame.html)

------
fimdomeio
I was toying a while back with the idea of making sites just for non-visual
browsers. There was basically just a piece of CSS blocking the visualization
of content and letting users know: "this is a web 0.5 website. This site is
best viewed in a terminal". The enforced rules were a kind of gentleman's
(gentleperson's) code of no CSS, no JS.

The conclusions I reached were that the thing loaded crazy fast (it's even
weird when you can no longer distinguish local from server), that it made for
a quite enjoyable coding experience since it's suddenly just 50% of the work,
and that the rendering of web pages in terminal browsers is actually really
nice.

~~~
pmlnr
It's called txt files.

Good example:
[http://textfiles.com/magazines/LOD/lod-1](http://textfiles.com/magazines/LOD/lod-1)

~~~
sp332
GP is talking about text that is hidden in a normal web browser and only
visible in something that ignores CSS.

~~~
andai
Like putting the entire website in the noscript tag?

------
irth
> Gopher is not HTML

Gopher can easily serve HTML content (and any other content type, too).

I made a Gopher HackerNews proxy a few years ago, you can see it in action by
running

        lynx gopher://hn.irth.pl

and check out the source at
[https://github.com/irth/gophernews](https://github.com/irth/gophernews)

~~~
spc476
There's also gopher://hngopher.com/

~~~
irth
Oh, this one's pretty cool. Makes me wanna do a similar thing for another
website that's as nice looking.

------
tambourine_man
>Gopher is a much feature-less protocol than html

HTML is not a protocol, that's HTTP.

------
diminish
We need a new mode for Firefox: an extremely restricted form of HTML5 without
JavaScript. Call it html0.

<doctype html0>

No JS, no third-party content; only html5+, css3+, text, images, videos, audio
and other stuff.

~~~
Casseres
If you have the ability to set the doctype of a page, don't you also already
have the ability to not load third-party content?

~~~
ZiiS
The point is to allow the User Agent to display an icon meaning "this page
says it doesn't need third-party media, so it will be prevented from loading
any".

~~~
krapp
That seems like a silly reason to have a restricted subset of HTML. Just show
that icon on existing HTML pages which don't include third party content.

------
rum3
How about not developing sites that break when JS is turned off? Why has it
become a standard to make websites completely in JavaScript when it brings
nothing positive to the table whatsoever? Who came up with this idiocy?

------
TheRealPomax
As someone who grew up with a 1200 baud modem and never used gopher: why would
I start using gopher? What even is it? Can I use it to host webpages? It
sounds like if "tracking is impossible" it probably can't use html+javascript?
Why would I want to use that?

~~~
welly
There is no good reason to use gopher other than for nostalgic reasons.

~~~
classichasclass
Gopher menus are quick to parse and interact with even on very constrained
systems, and the protocol is very simple. Given some of the other responses in
this thread, I don't think those things are simply "nostalgia."

I think you could argue that gopher has _few practical uses,_ and while (as a
Gopher user) I don't personally agree, I think the position is defensible
depending on what your use cases are. But Gopher is a good example of how a
minimal protocol can still offer services of some reasonable basic
functionality, and I think that's worth something more than reminiscence.

~~~
enneff
I love the Gopher protocol but there is no practical difference between a
gopher service and a web server serving directory listings and files.

~~~
classichasclass
You're excluding gopher menus themselves, which can be customized and act as
miniature documents, and search servers, which can take queries and offer
basic interactivity.

------
jboy55
I did a hackday project where I wrote a script that converted our intranet at
work into a gopher site. Before I started I was really enthused about it, but
once I got going it just became evident how much of a kludge these early
protocols were.

It's the COBOL of page description languages. It's truly horrible; it's not
like HTML was just a minor improvement, it's a complete conceptual shift. A
Gopher menu is just a tab-delimited file, so Excel is the best editor for it.

The first character is the type of the thing: it can be a submenu (1), a text
doc (0), a GIF (g), an image (I), a binary file (9), a BinHexed file (4), or
it can tell you the name of a mirror server so you can load balance?? (+).
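
Pulling one of those tab-delimited menu lines apart is mechanical; a minimal
sketch (the sample line points at sdf.org, but any menu line has the same
shape):

```python
def parse_menu_line(line):
    """Split one gopher menu line:
    <type char><display>TAB<selector>TAB<host>TAB<port>."""
    display, selector, host, port = line.rstrip("\r\n").split("\t")
    return {
        "type": display[0],    # '1' submenu, '0' text doc, 'I' image, ...
        "display": display[1:],
        "selector": selector,
        "host": host,
        "port": int(port),
    }

item = parse_menu_line("1Super-Dimensional Fortress\t/\tsdf.org\t70\r\n")
# item["type"] is "1": a submenu served from sdf.org on port 70
```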

How do you take form input, like a street address? You can't; it's one-way
data transfer.

~~~
classichasclass
Item type 7 is how you handle queries, and how things like Veronica were
implemented, so it's not one way. It's definitely constrained, but not
impossible. There was also item type 2 for CSO, though that is much less
common.
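
On the wire, a type 7 query is just the selector, a tab, and the search
string. A minimal sketch; the Veronica-2 host and selector in the comment are
assumptions and may have changed:

```python
import socket

def build_search_request(selector, query):
    # A type 7 request line: selector, TAB, search string, CRLF.
    return f"{selector}\t{query}\r\n"

def gopher_search(host, selector, query, port=70):
    """Send a type 7 query and return the raw menu of results."""
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(build_search_request(selector, query).encode("ascii"))
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode("ascii", "replace")

# Assumed example (Floodgap's Veronica-2 selector may have changed):
# print(gopher_search("gopher.floodgap.com", "/v2/vs", "gopher revival"))
```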

Gopher+ had ASK forms which were much like HTML form controls but were, like
much of Gopher+, complex to implement and not widely adopted. Some recent
clients and servers support arguments over stdin like POST requests, but this
too is not widely implemented.

~~~
jboy55
Most of these tools, like Archie and Veronica, seemed like magical services
when I started cruising the internet. I recently learned that Archie, when it
started, was nothing more than a collection of `ls -lR`s of various FTP
servers that would be searched with grep. Which, in hindsight, was the obvious
unixy way of doing it.

------
amiga-workbench
I'd rather have a conservative version of our current web standards: strip
things back to a sensible subset of what we have now, and possibly consider
putting some kind of heavy rate-limiting or quotas on any client-side code
that's run.

The web is no longer open if you need the funds and backing of a megacorp in
order to implement a renderer that covers the whole standard.

~~~
redwall_hp
Let's dial things back to before we entered the darkest timeline: the fork in
the road where HTML5 happened instead of XHTML2 development being continued. A
stricter, saner markup standard with less overhead to developing a rendering
engine, and shitcan scripts as well.

------
SkyMarshal
Blast from the past. If anyone wants to download NCSA Mosaic and load the
author's gopher site with it, here ya go:

[https://github.com/alandipert/ncsa-
mosaic](https://github.com/alandipert/ncsa-mosaic) (binaries in Ubuntu's Snap
Store, probably in other distros too)

Find out more at:
[https://en.wikipedia.org/wiki/Mosaic_(web_browser)](https://en.wikipedia.org/wiki/Mosaic_\(web_browser\))

------
jamestomasino
For those looking in the comments for places to explore in gopherspace, I
would recommend starting here (use lynx):

      gopher://sdf.org # large community
      gopher://floodgap.com # a venerable gopher presence
      gopher://bitreich.org # small but very active community
      gopher://gopher.black/1/moku-pona  # my phlog listing aggregator

------
floatingatoll
One tangent from this consideration would be:

What would it take to make Content-Type: text/markdown a reality for web
publishers?

~~~
yati
To start with, deciding what constitutes markdown, i.e., a spec. There are a
bunch of incompatible "flavours" out there.

~~~
vhakulinen
[https://commonmark.org/](https://commonmark.org/) is a good start.
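
On the serving side there's little in the way: text/markdown is a registered
media type (RFC 7763), with an optional `variant` parameter for naming a
flavour such as CommonMark. A sketch using Python's stdlib server; rendering
in the browser, rather than downloading, is the part that's still missing:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

class MarkdownHandler(SimpleHTTPRequestHandler):
    # Map .md files onto the registered type from RFC 7763; the
    # CommonMark variant here is an assumption, not a requirement.
    extensions_map = {
        **SimpleHTTPRequestHandler.extensions_map,
        ".md": "text/markdown; charset=utf-8; variant=CommonMark",
    }

# To run (serves the current directory, .md files get text/markdown):
# HTTPServer(("localhost", 8000), MarkdownHandler).serve_forever()
```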

------
achillean
There are around 281 Gopher servers active on the Internet at the moment:

[https://www.shodan.io/report/jhkXWTvL](https://www.shodan.io/report/jhkXWTvL)

Will be interesting to see whether that number shifts in the near future.

------
bborud
My memory is getting kind of blurry on this, but wasn't Gopher heading in a
direction where someone wanted to extract licensing fees from it?

I do remember discovering how WWW had made some leaps forward, promptly
abandoning my project to write a Gopher+ server, and instead turning what I
was working on into an HTTP server. Sadly I never bothered publishing the
code, since interesting things were happening with the NCSA httpd code at the
time (something which eventually turned into Apache).

~~~
classichasclass
Back in the day, yes, UMN tried this and it probably did indeed kill the
ecosystem. They backpedaled later but the damage was done.

[http://www.nic.funet.fi/pub/vms/networking/gopher/gopher-
sof...](http://www.nic.funet.fi/pub/vms/networking/gopher/gopher-software-
licensing-policy.ancient)

------
rmellow
The nature of this problem (that companies are able to track you) is not so
much technological as economic. Even if gopher had been the only alternative
back then, it would have evolved just as HTML/HTTP did to support ads and
tracking.

All a content provider that doesn't want to serve ads and tracking has to do
is not implement it. While content creators are still bound to whatever their
publishing platform chooses to do (e.g. any content on Medium is subject to
Medium's tracking practices), using an inferior technology is simply not a
realistic solution. This is essentially a human issue, technology has little
to do with it.

You want to enable ad-free, tracking-free mass publishing? Provide a free
publishing platform. The catch? Someone has to pay for it.

You don't want to be tracked? Disable javascript. Some sites stopped working?
Oh yeah, tracking you is how they pay the cost (nominal or economic) of
serving you content.

I would see merit, however, in a search engine that allowed filtering for
content that works without javascript.

------
chippy
There is a public gopher proxy you can use e.g.
[http://gopher.floodgap.com/gopher/gw?gopher://box.matto.nl:7...](http://gopher.floodgap.com/gopher/gw?gopher://box.matto.nl:70/0/revivegopher.txt)

------
rc-1140
Once again, an internet user decides that in order to solve a social problem,
we must move people to an ancient internet protocol for serving web pages
rather than _actually address_ the social problem by dealing with the real
world entities performing the tracking.

------
niftich
As others are saying, HTML isn't the problem and Gopher isn't the solution:
any bidirectional request-response protocol can be used to track clients,
because there's a record of interactions the server can save. Client-side
scripting as now commonly used on the Web increases the likelihood that some
of these events occur despite the user's intent, but hosts can track and
profile you by IP just fine, and if this hypothetical Gopher revival came to
pass, it would also revive an interest in server-side ad serving and log
mining that dynamic ads have long made obsolete.

The two solutions are to: (a) not interact with hosts who track you -- which
is hard to know ahead of time -- or (b) use a one-way broadcast protocol that
leaves no ability for hosts to collect an interaction stream. And this exists
too, from over-the-air television and radio, to teletext [1] and datacasting
[2]. Compare the business models: unencrypted broadcast streams are full of
ads too, but you don't get tracked. Or, the services are encrypted and the key
exchange is moved out of band; you trade a bit of your privacy to establish an
ongoing customer relationship to access gated content.

Of course, broadcast on public airwaves is heavily regulated, and broadcast on
unlicensed spectrum is sufficiently intertwined with and streamlined into
wireless internet to be hidden in plain sight. Despite its technical merits, a
broadcast 'renaissance' of sorts isn't likely to attract a discretionary
audience without a real integrated commercial offering raising awareness --
amateur radio and tech demos don't have universal appeal, but a sleek device
that accesses compelling first-party content in a privacy-preserving way
might. But it's also a technical gamble when more proven solutions are less
risky, and the kinds of players who deliver integrated offerings can deliver
their service over IP with less fuss.

[1]
[https://en.wikipedia.org/wiki/Teletext](https://en.wikipedia.org/wiki/Teletext)
[2]
[https://en.wikipedia.org/wiki/Datacasting](https://en.wikipedia.org/wiki/Datacasting)

------
DebtDeflation
I don't know if I necessarily want Gopher back, but I often dream of returning
to the days when "the Internet" was primarily Usenet, IRC, Telnet, and email.

~~~
teddyh
Don’t forget FTP. So much FTP.

------
lazyjones
The article makes the incorrect assumption that tracking depends on HTML
and/or JS/images. If we managed to revive Gopher, browser makers would soon
build tracking into browsers, and publishers would simply track on the server
side, as Cloudflare already does
([https://www.cloudflare.com/analytics/](https://www.cloudflare.com/analytics/)).

------
ivan_ah
I remember during my undergrad days, my university (McGill) used to have its
classified ads accessible via gopher. It was pretty popular and fairly easy to
use. Surprisingly, there were quite a few non-technical people on there, e.g.,
posting apartments to sublet. This was in the days of Windows 2k/ME, so people
had lower expectations for user interfaces back then.

------
z3t4
The article should at least have a small guide on how to get started: server
software, client software, how to make a "page"?

You could block all IP ranges for known trackers via a firewall, and also
disable JavaScript, cookies, and media content. Or just surf the web using an
old browser. Serious webmasters still make sure their web pages work in more
than just Chrome.

------
Avamander
My biggest issue with "gopher" is that I don't know how secure it is. How do I
know the connection I'm using is secure and hasn't been intercepted? Current
clients don't show that at all, if it's even possible. I couldn't care less
about tracking when the content isn't trustworthy.

------
giancarlostoro
What do Gopher pages look like, are they mostly ascii or is the format weird?
Why did HTML/HTTP become the standard over Gopher? It seems like Gopher could
be capable of doing similar things to the web, just nobody bothered to expand
on it or the standard is frozen in time.

~~~
decebalus1
With the risk of sounding patronizing, the wikipedia page provides answers to
all your questions
[https://en.wikipedia.org/wiki/Gopher_(protocol)](https://en.wikipedia.org/wiki/Gopher_\(protocol\))

~~~
giancarlostoro
Thank you, I appreciate the link. It's been a while since I've looked at that
page, so I probably forgot about some of what is there.

------
cortesoft
When they talk about “putting content on gopher”, what do they mean? Gopher is
basically FTP with the ability to link to other sites. Other than text blogs
or videos, what sort of content would we put on there?

------
Animats
If GitHub were still independent, a Gopher service for GitHub would make
sense. Gopher is basically a file server, and GitHub is a file repository.

------
pbreit
I was wondering if Net News / NNTP / Usenet could address some of the
distributed use cases people are trying to throw at blockchain?

~~~
mrweasel
Blockchain wasn't really on my mind, but NNTP is something I think we should
consider reviving.

Reddit and Facebook have taken over the old forums and mailing lists, but I
feel that those markets would be served equally well, or better, by NNTP.

The Reddit redesign makes it clear what direction they are moving in, and I
fear that it will kill off all the interesting subreddits where people have
real discussions. In their place will be an endless stream of memes, pictures,
and angry anti-Trump posts. All these subreddits will scatter and their users
will be left without a "home".

The village I live in has a Facebook group; it's a closed group, so there's no
browsing without a Facebook account. I'm relying on my wife to inform me if
anything interesting is posted. It's sad, because it's pretty much the only
source you can turn to if it smells like the entire village is burning or the
local power plant is making a funny sound -- all the stuff that's too small
for even local news, or is happening right now.

Usenet would, in my mind, be a great place to host the communities currently
on Facebook and Reddit. They would be safe from corporate control, or shifts
in focus from their "hosting partner", and everyone would have equal and open
access. Spam might be the unsolved problem, but I feel like that is something
we can manage.

I know that a Usenet comeback, with all the hopes and dreams I have for it,
isn't coming. People don't like NNTP; they like Facebook.

~~~
pbreit
Anyone know the easiest way to get a server up and running? I googled “open
source nntp” without great results.

~~~
arpstick
I wouldn't really classify it as a "good" NNTP server (nor would I recommend
it, as it's probably not fully compliant), but nntpchan is one possible route
if you don't mind the imageboard aftertaste. It's not meant for mainline
usenet. I made it out of frustration with INN's feed syntax, and because
another daemon that I used at the time was abandoned and written in Python (it
was an abomination, but it worked).

[https://github.com/majestrate/nntpchan](https://github.com/majestrate/nntpchan)

------
jakeogh
Disable JS and this problem fixes itself.

------
akras14
WTF is gopher space? Asking for a friend.

------
MentallyRetired
How am I going to get my grandma to use gopher? Solve that, then you'll have
my support.

------
wut42
Pleroma, the alternative ActivityPub server to Mastodon, has a built-in Gopher
service. :)

------
kensai
Ironically, I cannot go to the link he provides. gopher: links are not
supported by Safari. :D

------
mortdeus
why?

------
gcb0
> If you build it, they will come.

People already built it, and I'm not even talking about old gopher: adblockers
are that now.

People who are technical enough see the benefit and swear by it. We just need
to make it easier to use. Maybe an adblocker add-on with live support and
constant monitoring (and tweaking of the rules) is a product that you can sell
by the millions?

Canvas fingerprinting? Gone. Third-party cookies? Gone. Auto-play media? Gone.
Etc. Everyone says that privacy is the most expensive luxury nowadays. Maybe
we need to commoditize it?

~~~
akho
Ad blocker with constant monitoring looks like a fun absurdist art project.

------
mahkoh
_Every step you take on the web, every site you visit, every page you view, is
used to create and enhance a profile about you. Everything you do is carefully
tracked and monitored._

Bold of the author to openly admit this.

~~~
peterkelly
How is that bold? It's common knowledge.

~~~
krapp
It's bold of the author to admit that they are carefully tracking and
monitoring everyone using this site to create and enhance profiles on them,
particularly given the context of the article.

~~~
rum3
This particular site has no external trackers or analytics scripts.

~~~
krapp
But the claim made in the article is that " _every step_ you take on the web,
_every site_ you visit, _every page_ you view, is used to create and enhance a
profile about you. _Everything you do_ is carefully tracked and monitored."

Obviously, if this is true, then it must be true for that site as well.
Otherwise that assertion is just FUD and hyperbole, and it undermines the
credibility of the argument being made about the scale of the evil of the
modern web, and the necessity of a simpler, non-HTML based protocol to avoid
those evils.

~~~
rum3
All traffic is monitored by the countless agencies around the world, the
datacenters, the ISPs, spyware, malicious browser extensions, companies
"helping" you by making a backup of your bookmarks and history, and so on, so
it is definitely true for this site as well.

~~~
krapp
In that sense, it would be true for gopher as well.

But the article is clearly describing javascript, analytics and tracking
within HTML, with the solution being Gopher's "featurelessness." But it's
possible to build an HTML page without analytics and tracking, or even with
non-malicious javascript, so the premise that the only way to escape that is
to leave the web entirely for simpler and more restrictive pastures is untrue.

Not that the point needs to be belabored but it's worth pointing out that the
article opens with a patent falsehood.

