Let’s fast-forward to today: yes, we’ve gone overboard all over, but then again, Gopher (I think) doesn’t come standard with TLS, and it hasn’t gone through the evolution that HTTP[S] has, which made HTTP the robust and scalable backbone it is today.
What I’m trying to say is that we should not casually float pipe dreams about switching to ancient tech that wasn’t that good to begin with. Yes, electric cars were already a thing in the early 1900s, and maybe we took a wrong turn with the combustion engine, but with Gopher I think we should let sleeping dogs lie and focus on improving the next version of QUIC, or even on inventing something entirely new that would address many of the concerns in the article without sacrificing the years of innovation since we abandoned Gopher. Heck, this new thing might as well run on TCP/70, never mind that UDP appears to be the thing now.
A lightweight HTTP/TLS subset that severely limits client-side execution expectations would seem to accomplish the same goals, while repurposing all the amazing tech we've built since the 1990s.
Essentially, "just pass me the bare minimum of response to make Firefox Reader View work."
... but then we wouldn't be able to serve high-value targeted ads, would we?
This assumption might require substantially reworking the hyperlink model of the internet, so that external references to content delivered by third parties are sharply distinguished from internal references to other pages within the same work.
This "offline archive format" has numerous benefits:
(A) Cognitive benefits of a limited/standard UI for information (e.g. "read on a black-and-white ereader device"),
(B) Accessibility: standardizing on text would make life easier for people using screen readers,
(C) Performance (since everything is accessed from localhost),
(D) Async access: reaching the "edge" of the subgraph of the internet you have pre-downloaded on your localnet could be recorded and queued up for async retrieval by "opportunistic means", e.g., the next time you connect to free wifi somewhere, you retrieve the content and resolve those queued "HTTP promises",
(E) Cognitive benefits of staying on task when doing research (read the actual paper you wanted to read, instead of getting lost reading the references, and the references' references).
I'm not sure what "standard" for offline media (A) we should target... Do we allow video or not? On the one hand, video is greatly useful as a communication medium; on the other, it's a very passive medium, often associated with entertainment rather than information. Hard choice if you ask me.
I'm sure such "pre-fetched HTTP" already exists in some form, no? Or is it just not that useful if you only have "one hop" in the graph? How hard would it be to crawl/scrape 2 hops? 3 hops? I think we could have a pretty good offline internet experience with a few hops. Personally, I think async interactions with the internet limited to 3 hops would improve my focus. I'm thinking of hckrnews crawled + 3 hops of linked web content, a clone of any GitHub repo encountered (if <10MB), and maybe DOI links resolved to the actual paper from sci-hub. Having access to this would deliver 80%+ of my daily "internet value", and more importantly let me cut myself off from low-value information like news and YouTube entertainment.
update: found WARC https://en.wikipedia.org/wiki/Web_ARChive http://archive-access.sourceforge.net/warc/warc_file_format-...
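For a sense of scale, here's a rough sketch of the "few hops" idea in Python: a breadth-first crawl to a fixed hop count, with every response archived as a WARC record. It assumes the third-party requests, beautifulsoup4, and warcio packages; a real tool would add politeness delays, robots.txt handling, and the queued "HTTP promises" retry described above.

```python
# Rough sketch: BFS-crawl N hops out from a seed and archive every
# response into a WARC file (assumes requests, beautifulsoup4, warcio).
from collections import deque
from io import BytesIO
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup
from warcio.statusandheaders import StatusAndHeaders
from warcio.warcwriter import WARCWriter

MAX_HOPS = 3
MAX_PAGES = 500  # safety valve: 3 hops can explode combinatorially

def crawl(seed, out_path='offline.warc.gz'):
    seen, queue = {seed}, deque([(seed, 0)])
    with open(out_path, 'wb') as f:
        writer = WARCWriter(f, gzip=True)
        while queue and len(seen) < MAX_PAGES:
            url, depth = queue.popleft()
            try:
                resp = requests.get(url, timeout=10)
            except requests.RequestException:
                continue  # a real tool would queue this for async retry
            headers = StatusAndHeaders(
                f'{resp.status_code} {resp.reason}',
                list(resp.headers.items()), protocol='HTTP/1.1')
            writer.write_record(writer.create_warc_record(
                url, 'response', payload=BytesIO(resp.content),
                http_headers=headers))
            # follow links only while we still have hops left
            if depth < MAX_HOPS and 'html' in resp.headers.get('Content-Type', ''):
                for a in BeautifulSoup(resp.text, 'html.parser').find_all('a', href=True):
                    link = urljoin(url, a['href'])
                    if link.startswith('http') and link not in seen:
                        seen.add(link)
                        queue.append((link, depth + 1))

crawl('https://example.com/')
```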
How many links from any given page are ever taken? And is it worth network capacity and storage to cache any given one?
Plan 9 OS and the 9P protocol
I'm carving out a subsection for this, as the concept appears to contain a number of the elements (though not all of them) mentioned above. See Wikipedia's 9P (protocol) entry for more:
In particular, 9P supports Plan 9 applications through file servers:
acme: a text editor/development environment
rio: the Plan 9 windowing system
plumber: interprocess communication
ftpfs: an FTP client which presents the files and directories on a remote FTP server in the local namespace
wikifs: a wiki editing tool which presents a remote wiki as files in the local namespace
webfs: a file server that retrieves data from URLs and presents the contents and details of responses as files in the local namespace
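To make webfs concrete, here is a sketch (Python used just for readability) of what fetching a URL through it looks like, assuming webfs is mounted at /mnt/web. The file names follow my reading of the webfs(4) manual, so treat them as an assumption; the point is that HTTP becomes nothing but open/read/write on files.

```python
# Sketch: HTTP as file I/O, webfs style. File names per my reading of
# webfs(4); check the manual before relying on them.
def fetch(url, mnt='/mnt/web'):
    with open(f'{mnt}/clone', 'r+') as ctl:
        conn = ctl.readline().strip()   # allocates a connection; the open
                                        # fd doubles as its ctl file
        ctl.write(f'url {url}\n')       # point the connection at the URL
        ctl.flush()
        with open(f'{mnt}/{conn}/body', 'rb') as body:
            return body.read()          # reading the body performs the GET
```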
I wish it were possible to use an HTML meta tag to declare to the user-agent that it should show the content in reader view.
Then sites that want to provide only text and images, and no ads etc., could be implemented without any CSS and with minimal markup, and still be nice and readable on all devices, thanks to the user-agent's reader view taking care of the presentation.
And I don’t see any real benefit in changing the defaults either. Most sites want to provide custom CSS. The point of reader view is to make simple articles consisting of text and images comfortable to read on your specific device. Device/resolution specific defaults would be at least as painful, and probably more painful, to override for every site that wants to use custom CSS.
Whereas an explicit meta tag telling the user-agent to use reader view is entirely feasible. Such a meta tag does not interfere with existing pages, requires nothing extra from sites that don’t want it, and would still fall back to something that works for all end-users whose user-agents don’t implement it (because browsers ignore meta tags they don’t understand so those browsers would render the page in the exact same way they would any unstyled page). And on top of that this theoretical meta tag would be really easy for browser vendors to implement — they parse the HTML, see the meta tag and trigger the reader view mode that they have already implemented.
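Concretely, such a page could look like the sketch below. The meta name is made up (no browser defines one today), and because browsers ignore meta tags they don't understand, the fallback is exactly the unstyled-but-readable page described above:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <!-- hypothetical tag: asks the user-agent to render in reader view -->
  <meta name="presentation" content="reader-view">
  <title>A plain article</title>
</head>
<body>
  <article>
    <h1>A plain article</h1>
    <p>Just text and images; the user-agent owns the presentation.</p>
  </article>
</body>
</html>
```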
What would that constraint look like? Is animation and interaction prohibited there, for example?
The lesson from the modern web being, if you give web developers a toolbox, they'll figure out how to build a surveillance system.
The answer would seem to be that we should be far more careful what tools we allow to be used.
(Note: I'm not saying we should restrict the whole web this way, but this is how one might build the parent article's wikipedia-esque info web.)
If we allow all those, it's just the modern web again.
I think it's generally useful to look at email here, where only a small subset of HTML (primarily photos + text) is reliably supported across clients.
No third party connections allowed. As for the originating server, they already know you requested the page, no?
Alternatively/in addition, the user agent can treat embedded assets like textbooks do, and present them all as numbered, boxed, and captioned figures.
I never understood why people have a problem with normal contrast. Low contrast is hard to read.
The vast teeming hordes of Kardashian fans and youtube addicts would not. They like where we are now.
You know what I'd love to see? A comparison chart of all the historic SGI machines, with their MSRP in their heyday against comparable compute capabilities in modern devices.
I remember when we were moving ILM/Lucas to the Presidio - they were throwing out massive SGI machines, which had cost hundreds of thousands at the time - but some of them were turned into kegerators...
A parallel protocol and hypermedia format that's restricted enough to prevent tracking isn't going to attract everyone. It's going to attract the subset of users who care enough about privacy to give up "rich Web" features like single-page applications and animated HTML canvas elements in return.
That's not a group that's likely to start immediately demanding to build in the features needed to create infiniscrolling Pinterest feeds. And it's also going to be a much more restricted group, meaning businesses won't stand to profit much by pushing for it. So there might not be any economic incentive to do it.
At least, that's how it might be at first. If it remains a nice place to be for 5 years, I'd call it a decent run. 10, and I'd be ecstatic.
The modern web cannot do that.
(I’m not convinced gopher can either — I think NNTP was actually better at that — but like the thought experiment.)
I yearn for a future where knowledge is distributed in a practical, organized way, such as via Gopher or similar, along with tapping the full potential of email as a one-to-many, many-to-many, and many-to-one communications medium where data is distributed and address ownership is maintained.
We already have that, it's called the World Wide Web.
You seem to be under the impression that the popularity of the web is due to centralization and commercialization by corporate interests, but that isn't the case. It's still entirely feasible to distribute knowledge in a "practical, organized way" using HTML and HTTP, and people do use it for things besides the three social media sites people now mistakenly believe comprise the entire web.
I'd start nearly any session with Gopher, and it would end in either some web pages or an FTP server. Gopher was the go-to because it was organised unlike the rat's nest of links you had to deal with on the web.
This wasn't really fixed until the advent of big centralised search engines (and even then, the early ones weren't worth a damn).
It'd be nice to have a quieter place like old gopher, but yeah the real increase in privacy it would bring is somewhat illusory.
What do users get in the end? Half of the web is still tracking them. And many of the big guys still track them.
Not enough if you ask me. That's what makes it so difficult.
So let's solve that: let's build a search engine that lets me filter sites according to privacy, and that is perceived to be as good as Google - because in today's world, in many jobs, you cannot give up an information advantage.
That's kind of an impossible mission.
Gopher is an alternative to HTTP, not HTML. HTML can be used with Gopher, and since documents served via Gopher can be accessed by URLs, its hypermedia features are usable over Gopher. (But Gopher is read-only, unlike HTTP, so forms are limited: HTML's built-in form behavior presupposes HTTP and its verbs, even though it uses URLs, so even with JS, a Gopher call for a form would only get you the equivalent of GET forms.)
In principle at least; other than Lynx and an extension for Firefox, I don't think any current browser supports Gopher anyway.
> If you somehow managed to pull enough users to Gopher, they'd just write Gopher Chrome and start adding new features that conveniently allow tracking into it
You can do IP-based tracking on any TCP/IP protocol, and if clients have JS support (or other scripting, but if you make it an alternate channel for HTML, JS is the obvious choice), aren't a monoculture, and have differences detectable with JS, you can do client fingerprinting on top of that. Yes, even with the Gopher protocol alone, as long as your server can treat certain crafted requests specially.
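To illustrate the crafted-requests point, a hypothetical sketch: a server mints a fresh token into the selectors of every menu it serves, so any follow-up request carrying the token links back to the visit. No cookies or JS required:

```python
# Hypothetical illustration of tracking over plain gopher: per-visitor
# tokens baked into menu selectors, joined to the visitor server-side.
import uuid

def tracked_menu(client_ip, visits):
    token = uuid.uuid4().hex
    visits[token] = client_ip  # later requests for /<token>/... identify the visitor
    return (f"0Article one\t/{token}/art1.txt\tgopher.example.org\t70\r\n"
            f"0Article two\t/{token}/art2.txt\tgopher.example.org\t70\r\n"
            ".\r\n")
```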
Alright, let's talk to our representatives and ask them to consider taxing tracking and data collection.
But then again, the harm of advertising and surveillance capitalism is a thing. So is the focus on data hoarding, vendor lock-in, and favoring prettiness over utility.
I really wish we could run a parallel web. One optimized for utility, where data and content are available in maximally useful form, where users are in control of their rendering and free to use whatever automation they want. Not a replacement web, just one for people who are willing to jump through some hoops in order to avoid the crap that's on the mainstream one.
I don't know much about Gopher yet (I'm starting to learn now), but maybe such a parallel web could be developed there?
And nobody uses it.
Freenet is a distributed content-addressed store, much like IPFS, except that rather than you directly fetching content from other users who have specifically "pinned" that content (outing you both as interested in the material), the request hops over multiple users, leaving it cached (in an encrypted form, for plausible deniability) over their machines, so that it's very hard to tell who actually requested it.
usenet was also great until spam went nuts.
FB, for example, is a US-based company. Last time I checked, you are not forced to accept outside money, raise VC rounds, or go IPO.
Mark chose to go the "American path", that is, capitalism to the maximum, so of course I will lose an argument over why he is trying to maximize profits. But nothing stopped him from building sponsorship agreements with, e.g., Fortune 500 corps, instead of building a bidding platform à la Google.
I'm pretty sure that if you signed up the Fortune 500 and ran, for example, 500 rotating banners, it would give you enough funds to run operations and pay every employee a $150,000 salary. Plus having exactly ZERO tracking cookies and ZERO malicious JS scripts following you. It's quite possible given FB's size and reach, but again, "this is America, this is business."
Maybe what we need is a search engine that penalises JS and tracker use.
Since it's on Tor, there's no need for evil centralization for DoS protection, since that's baked into the protocol. Additionally, the onion vanity name you brute-forced cannot simply be taken away from you if there's political or social pressure on your registrar or above.
No, we don't need gopher. We need people to stop running third party code like it's some normal thing. We need devs to stop making websites that don't render unless you run their code.
It's really not that hard to run a hidden service. No harder than running a webserver. And everyone's home connections are fast enough now.
This is why I think a search engine that searches only the 'cool' (read: old-style) web would help.
I don’t think that social media has “stolen” many blog readers; rather, the number of people on the web has increased by a helluva lot.
The fact that you can share it to help others as well makes it a blog.
I think the USP would have to be something about being "reading friendly" or offering a "consistent reading experience", etc.
(it's very far from a lot of CSS features, but it works)
The problem with that is that it requires restraint, and I just don't see much of that in web content creators. In an environment where everyone is competing for attention and clicks and tracking, how do you expect people to willingly do less?
The only approach I could possibly see working is picking some subset of web browser functionality, and branding it as the Next Cool Thing, somehow. Maybe some way to badge or advertise pages, like "JS-free", or "KB-fast" (all content < 1.0MB).
You can still do some pretty amazing designs in a million bytes of HTML+CSS.
- dark/light auto or manual theme switching
- syntax highlighting ( https://prismjs.com/ ), because unhighlighted code in `pre` blocks is hard to read.
A cool thing is that you can build a server in an afternoon starting with nothing more than your favorite programming language, some TCP server docs, and the wikipedia page.
I’d love to see people build some gopher sites to do stupid and crazy things. Interactive fiction over gopher? Sure! SQL to gopher gateway with ascii viz? Awesome!
Everyone should have a gopher hole... probably firewalled off of any production networks.
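It really is an afternoon project. Here's a minimal sketch in Python (RFC 1436 flavor: the client sends a selector terminated by CRLF, the server answers and closes; menu lines are tab-separated). Port 7070 is used since binding the canonical port 70 needs privileges:

```python
# Minimal gopher server sketch: selector in, menu or document out.
import socketserver

HOST, PORT = 'localhost', 7070

MENU = ("iWelcome to my gopher hole\tfake\t(NULL)\t0\r\n"
        "0About this server\t/about.txt\tlocalhost\t7070\r\n"
        ".\r\n")

class GopherHandler(socketserver.StreamRequestHandler):
    def handle(self):
        selector = self.rfile.readline().strip().decode('latin-1')
        if selector == '':                      # empty selector: root menu
            self.wfile.write(MENU.encode('latin-1'))
        elif selector == '/about.txt':
            self.wfile.write(b'Built in an afternoon.\r\n.\r\n')
        else:                                   # type 3 = error item
            self.wfile.write(b"3Not found\terror\t(NULL)\t0\r\n.\r\n")

with socketserver.ThreadingTCPServer((HOST, PORT), GopherHandler) as srv:
    srv.serve_forever()
```

Test it with `printf '\r\n' | nc localhost 7070`.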
It's not that hard. Just iterate on any old idea that's even slightly more appealing to hack on than a full-blown browser. That includes... let's see... nearly anything!
Then just be smart and dedicated about specifying the behavior of the new thing and figuring out workarounds for the awful parts.
Ian Hickson did it.
It wouldn't solve everything, but would make a nice playground that might be taken interesting places.
It's not that gopher: is some novelty that no-one has ever adopted. It's that a WWW browser nowadays lacks quite a lot of things that used to be commonly built-in to WWW browsers. gopher: scheme support has gone completely, as has news: support. ftp: support has been reimplemented several times, and is significantly poorer now than it used to be.
(Yes, it's accessible over Gopher too, just to be difficult)
It's cross-platform: I use the statically linked Linux binary. Loads very quickly, like Dillo. Author commented on HN once on something unrelated and I stumbled across it on his site. Doesn't do images (yet) and I just discovered that it doesn't let me highlight text (bummer) but overall... nice client to have.
Hopefully the author (runtimeterror) continues to work on 'Little Gopher'.
This is what OP's linked gopher page looks like:
The article's author: true to his word. Still keeping his gopher page current, with the latest post updated 11 Jan 2019.
I wrote about the technical differences between http and gopher: http://boston.conman.org/2019/01/12.2
The conclusions I reached were that the thing had crazy fast loading (it's even weird when you can no longer distinguish local from server), that it would actually be quite an enjoyable coding experience since it's suddenly just 50% of the work, and that the rendering of web pages in terminal browsers is actually really nice.
Gopher can easily serve HTML content (and any other content type, too)
I made a Gopher HackerNews proxy a few years ago, you can see it in action by running
HTML is not a protocol, that's HTTP.
No JS, no third-party content; only HTML5+, CSS3+, text, images, videos, audio, and other stuff.
However, even without the help of those headers, one could also have some discipline (perhaps also respect for users) and refrain from putting tracking and other undesirable things onto their website.
This doesn't seem to be a technical problem, so a technical solution - especially an opt-in one - probably won't help.
It's time to develop a new independent web-zero with no sugar. Use a mode for Firefox and punish the cruft and bloat.
I think you could argue that gopher has few practical uses, and while (as a Gopher user) I don't personally agree, I think the position is defensible depending on what your use cases are. But Gopher is a good example of how a minimal protocol can still offer services of some reasonable basic functionality, and I think that's worth something more than reminiscence.
It's the COBOL of page description languages. It's truly horrible; it's not like HTML was just a minor improvement, it's a complete conceptual shift. Gopher is just a tab-delimited file, so Excel is the best editor for it.
The first character is the type of the item: it can be a submenu (1), a text doc (0), a GIF (g), an image (I), a binary file (9), a BinHex file (4), or it can tell you the name of a mirror server so you can load balance?? (+).
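For illustration, a menu really is just tab-separated lines (hypothetical host): a type character glued to the display string, then selector, host, and port:

```
1Weather maps	/maps	gopher.example.org	70
0README	/readme.txt	gopher.example.org	70
gLogo	/logo.gif	gopher.example.org	70
```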
How do you take form input, like a street address? You can't; it's one-way data transfer.
Gopher+ had ASK forms which were much like HTML form controls but were, like much of Gopher+, complex to implement and not widely adopted. Some recent clients and servers support arguments over stdin like POST requests, but this too is not widely implemented.
The web is no longer open if you need the funds and backing of a megacorp in order to implement a renderer that covers the whole standard.
https://github.com/alandipert/ncsa-mosaic (binaries in Ubuntu's Snap Store, probably in other distros too)
Find out more at: https://en.wikipedia.org/wiki/Mosaic_(web_browser)
gopher://sdf.org # large community
gopher://floodgap.com # a venerable gopher presence
gopher://bitreich.org # small but very active community
gopher://gopher.black/1/moku-pona # my phlog listing aggregator
What would it take to make Content-Type: text/markdown a reality for web publishers?
Markdown is a textual format intended to result in HTML anyway, and it includes the entirety of HTML already in its spec.
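The serving half is already trivial; a sketch using only the Python standard library is below. The unsolved half is getting browsers to render text/markdown inline rather than offering it as a download:

```python
# Serve the current directory, mapping .md files to text/markdown.
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

SimpleHTTPRequestHandler.extensions_map['.md'] = 'text/markdown'
ThreadingHTTPServer(('', 8000), SimpleHTTPRequestHandler).serve_forever()
```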
Will be interesting to see whether that number shifts in the near future.
People already built it. And I'm not even talking about old gopher: ad blockers are that now.
People who are technical enough see the benefit and swear by it. We just need to make it easier to use. Maybe an adblocker add-on with live support and constant monitoring (and tweaking of the rules) is a product that you could sell by the millions?
Canvas fingerprinting? Gone. Third-party cookies? Gone. Auto-play media? Gone. Etc. Everyone says that privacy is the most expensive luxury nowadays. Maybe we need to commoditize it?
I do remember discovering how WWW had made some leaps forward and promptly abandoning my project to write a Gopher+ server and instead turning what I was working on into an HTTP server. Sadly I never bothered publishing the code since interesting things were happening with the NCSA httpd code at the time (something which eventually turned into Apache)
All a content provider that doesn't want to serve ads and tracking has to do is not implement it. While content creators are still bound to whatever their publishing platform chooses to do (e.g. any content on Medium is subject to Medium's tracking practices), using an inferior technology is simply not a realistic solution. This is essentially a human issue, technology has little to do with it.
You want to enable ad-free, tracking-free mass publishing? Provide a free publishing platform. The catch? Someone has to pay for it.
The two solutions are to: (a) not interact with hosts who track you -- which is hard to know ahead of time -- or (b) use a one-way broadcast protocol that leaves no ability for hosts to collect an interaction stream. And this exists too, from over-the-air television and radio, to teletext and datacasting. Compare the business models: unencrypted broadcast streams are full of ads too, but you don't get tracked. Or, the services are encrypted and the key exchange is moved out of band; you trade a bit of your privacy to establish an ongoing customer relationship to access gated content.
Of course, broadcast on public airwaves is heavily regulated, and broadcast on unlicensed spectrum is sufficiently intertwined with and streamlined into wireless internet to be hidden in plain sight. Despite its technical merits, a broadcast 'renaissance' of sorts isn't likely to attract a discretionary audience without a real integrated commercial offering raising awareness -- amateur radio and tech demos don't have universal appeal, but a sleek device that accesses compelling first-party content in a privacy-preserving way might. But it's also a technical gamble when more proven solutions are less risky, and the kinds of players who deliver integrated offerings can deliver their service over IP with less fuss.
 https://en.wikipedia.org/wiki/Teletext  https://en.wikipedia.org/wiki/Datacasting
(Disclaimer: I maintain gopher.floodgap.com)
Reddit and Facebook have taken over the old forums and mailing lists, but I feel that those markets would be served equally well, or better, by NNTP.
The Reddit redesign makes it clear what direction they are moving in, and I fear that it will kill off all the interesting subreddits, where people have real discussions. In their place will be an endless stream of memes, pictures, and angry anti-Trump posts. All these subreddits will scatter and their users will be left without a "home".
The village I live in has a Facebook group; it's a closed group, so no browsing without a Facebook account. I'm relying on my wife to inform me if anything interesting is posted. It's sad, because it's pretty much the only source you can turn to if it smells like the entire village is burning or the local power plant is making a funny sound. All the stuff that's too small for even local news, or is happening right now.
Usenet would, in my mind, be a great place to host the communities currently on Facebook and Reddit. They would be safe from corporate control, or shifts in focus from their "hosting partner", and everyone would have equal and open access. Spam might be the unsolved problem, but I feel like that is something we can manage.
I know that a Usenet comeback, with all the hopes and dreams I have for it, isn't coming. People don't like NNTP; they like Facebook.
Personally, when I needed NNTP I sprung straight for INN, but be prepared to educate yourself.
Bold of the author to openly admit this.
Obviously, if this is true, then it must be true for that site as well. Otherwise that assertion is just FUD and hyperbole, and it undermines the credibility of the argument being made about the scale of the evil of the modern web, and the necessity of a simpler, non-HTML based protocol to avoid those evils.
Not that the point needs to be belabored but it's worth pointing out that the article opens with a patent falsehood.
* if you use traditional web browsers.
I've been moving more and more of my browsing over to Tor.
Both Reddit and HN can be browsed, though the former requires JS to fully function (a persistent problem across the web).
I can't do all my browsing on Tor, but I can do a substantial chunk. Conversely, I can maintain "clean" profiles tied to my real name that seem to simply check email, read the news a bit, and check the weather.
I tried using QubesOS on my primary (desktop-ish) laptop: i7, 0.5 TB SSD, 32 GB RAM for $500 on eBay. And I would have kept using it if it hadn't been for research into SDRs.
I needed the USB performance for the SDRs, and the way Qubes does it, all USB data is sent to a USB VM to protect against all sorts of bad-USB attacks (BadUSB, rubber duckies, USB-GSM gateways, etc.).
But if I wasn't doing SDR work, you can bet on it that I'd be using Qubes.
It might be more economical to buy an old chromebook or something to repurpose for TAILS rather than buy a really souped up laptop for Qubes.
Then again, I'm not doing anything evil, so my threat model is a little looser. (I don't think they're out to get me specifically, but they'll happily siphon up whatever they can get.)
If you do things like resize your window or install nonstandard applications, that could make you a unique Whonix user, but not reveal your true IP.